modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string)
---|---|---|---|---|---|---|---|---|---
rosieyzh/llama3.1-8b-uf-sft | rosieyzh | 2025-09-24T03:06:33Z | 0 | 0 | transformers | ["transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-24T03:00:37Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
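Until the authors add their own snippet, here is a minimal sketch that assumes the checkpoint works with the standard 🤗 Transformers text-generation pipeline (the repository id is this card's repository; the prompt is illustrative):
```python
# Hedged sketch: load this repository with the standard text-generation pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="rosieyzh/llama3.1-8b-uf-sft",  # this card's repository
    device_map="auto",
)

messages = [{"role": "user", "content": "Explain instruction tuning in two sentences."}]
output = generator(messages, max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```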
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mizutoukotori/pi0_so101_v4 | mizutoukotori | 2025-09-24T03:05:34Z | 0 | 0 | lerobot | ["lerobot", "safetensors", "robotics", "pi0", "dataset:mizutoukotori/pick_up_the_yellow_block_in_the_yellow_bowl", "arxiv:2410.24164", "license:apache-2.0", "region:us"] | robotics | 2025-09-24T02:33:04Z |
---
datasets: mizutoukotori/pick_up_the_yellow_block_in_the_yellow_bowl
library_name: lerobot
license: apache-2.0
model_name: pi0
pipeline_tag: robotics
tags:
- robotics
- lerobot
- pi0
---
# Model Card for pi0
<!-- Provide a quick summary of what the model is/does. -->
[Pi0](https://huggingface.co/papers/2410.24164) is a generalist vision-language-action transformer that converts multimodal observations and text instructions into robot actions for zero-shot task transfer.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=pi0 \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
karim12344321/results | karim12344321 | 2025-09-24T02:58:12Z | 0 | 0 | peft | ["peft", "safetensors", "base_model:adapter:microsoft/phi-1_5", "lora", "transformers", "text-generation", "base_model:microsoft/phi-1_5", "license:mit", "region:us"] | text-generation | 2025-09-24T02:57:04Z |
---
library_name: peft
license: mit
base_model: microsoft/phi-1_5
tags:
- base_model:adapter:microsoft/phi-1_5
- lora
- transformers
pipeline_tag: text-generation
model-index:
- name: results
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [microsoft/phi-1_5](https://huggingface.co/microsoft/phi-1_5) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4614
## Model description
More information needed
## Intended uses & limitations
More information needed
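In the meantime, a minimal sketch of applying this LoRA adapter to its base model with PEFT (the repository id is this card's repository; the prompt is illustrative):
```python
# Hedged sketch: load the base model and attach this LoRA adapter with PEFT.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-1_5")
model = PeftModel.from_pretrained(base, "karim12344321/results")
tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-1_5")

inputs = tokenizer("def fibonacci(n):", return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```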
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 8
- optimizer: paged_adamw_8bit with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3218 | 1.0 | 250 | 2.4614 |
### Framework versions
- PEFT 0.17.1
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
WireCell/dnn-roi | WireCell | 2025-09-24T02:40:18Z | 0 | 0 | pytorch | ["pytorch", "torchscript", "image-segmentation", "particle-physics", "neutrino-detectors", "liquid-argon", "en", "dataset:custom", "license:lgpl-3.0", "model-index", "region:us"] | image-segmentation | 2025-08-25T17:48:23Z |
---
language: en
license: lgpl-3.0
tags:
- pytorch
- torchscript
- image-segmentation
- particle-physics
- neutrino-detectors
- liquid-argon
datasets:
- custom
library_name: pytorch
pipeline_tag: image-segmentation
model-index:
- name: DNN ROI Models
results:
- task:
type: image-segmentation
name: Region of Interest Detection
metrics:
- type: iou
value: N/A
name: Intersection over Union
- type: dice
value: N/A
name: Dice Coefficient
models:
- icarus/moon-2025-08-25
- pdhd/dikshant/mobileunet
- pdhd/hokyeong/mobilenetv3
- pdsp/unet
- pdsp/nestedunet
- sbnd/sbnd_data-v01_34_00
---
# DNN ROI Models
A collection of deep neural network models for region of interest (ROI) detection in LArTPC experiments, including ICARUS, ProtoDUNE-HD, ProtoDUNE-SP, and SBND.
```bash
├── icarus
│ └── moon-2025-08-25
│ ├── plane0_rand.ts
│ └── plane1_rand.ts
├── pdhd
│ ├── dikshant
│ │ ├── mobileunet_largedataset_fullimage.ts
│ │ ├── mobileunet_largedataset_rebin4.ts
│ │ ├── unet_largedataset_fullimage.ts
│ │ └── unet_largedataset_rebin4.ts
│ └── hokyeong
│ └── CP49_mobilenetv3.ts
├── pdsp
│ ├── pth-model # models in pytorch pickle format
│ │ ├── nestedunet-l23-cosmic500-e50.pth # input: loose, MP2, MP3
│ │ ├── unet-l23-cosmic500-e50.pth # input: loose, MP2, MP3
│ │ └── unet-lt-cosmic500-e50.pth # input: loose, tight
│ ├── ts-model-1.3 # TorchScript model saved using PyTorch 1.3
│ │ ├── nestedunet-l23-cosmic500-e50.ts
│ │ ├── unet-l23-cosmic500-e50.ts
│ │ └── unet-lt-cosmic500-e50.ts
│ └── ts-model-2.3 # TorchScript model saved using PyTorch 2.3
│ ├── nestedunet-l23-cosmic500-e50.ts
│ ├── unet-l23-cosmic500-e50.ts
│ └── unet-lt-cosmic500-e50.ts
├── README.md
└── sbnd
└── sbnd_data-v01_34_00
├── plane0.ts
└── plane1.ts
```
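As a usage hint, here is a minimal sketch of loading one of the TorchScript checkpoints listed above. The file path comes from the tree; the input tensor shape is only an assumption and should be replaced with the plane image dimensions your WireCell configuration actually produces.
```python
# Hedged sketch: load a TorchScript DNN ROI model and run it on a dummy image.
import torch

model = torch.jit.load("pdsp/ts-model-2.3/unet-l23-cosmic500-e50.ts", map_location="cpu")
model.eval()

# Assumed layout: (batch, input images [loose, MP2, MP3], channels, ticks).
# Replace 476 x 600 with the actual plane image size used in your job.
dummy = torch.rand(1, 3, 476, 600)
with torch.no_grad():
    roi = model(dummy)
print(roi.shape)  # per-pixel ROI score map
```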
|
HaoranMS/DeepSeek-R1-Distill-Qwen-1.5B-1d2-0923 | HaoranMS | 2025-09-24T02:34:23Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "open-r1", "conversational", "dataset:data/open-s1", "arxiv:2402.03300", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-24T02:32:41Z |
---
datasets: data/open-s1
library_name: transformers
tags:
- generated_from_trainer
- open-r1
licence: license
---
# Model Card for None
This model is a fine-tuned version of [None](https://huggingface.co/None) on the [data/open-s1](https://huggingface.co/datasets/data/open-s1) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="HaoranMS/DeepSeek-R1-Distill-Qwen-1.5B-1d2-0923", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/t-haorandang-ms/wandb_DeepSeek-R1-Distill-Qwen-1.5B/runs/bjcqvv5k)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0
- Transformers: 4.50.0
- Pytorch: 2.5.1
- Datasets: 4.0.0
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
reinforce-flow/qwen2.5math-1.5b-gen8-global-meanvar-iter-260 | reinforce-flow | 2025-09-24T02:22:04Z | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-24T02:21:32Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
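Until the authors add their own snippet, here is a minimal sketch that assumes the checkpoint works with the standard 🤗 Transformers text-generation pipeline (the repository id is this card's repository; the prompt is illustrative):
```python
# Hedged sketch: load this repository with the standard text-generation pipeline.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="reinforce-flow/qwen2.5math-1.5b-gen8-global-meanvar-iter-260",  # this card's repository
    device_map="auto",
)

messages = [{"role": "user", "content": "If 3x + 5 = 20, what is x?"}]
output = generator(messages, max_new_tokens=256, return_full_text=False)[0]
print(output["generated_text"])
```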
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
imperat0/curso-fiap-daniel | imperat0 | 2025-09-24T02:19:35Z | 0 | 0 | null | ["safetensors", "distilbert", "pt", "license:mit", "region:us"] | null | 2025-09-24T02:03:02Z |
---
license: mit
language:
- pt
---
|
Sean13/llama-8b-instruct-ripo-full | Sean13 | 2025-09-24T02:18:26Z | 0 | 0 | transformers | ["transformers", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "trl", "dpo", "conversational", "arxiv:2305.18290", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-23T19:05:49Z |
---
library_name: transformers
model_name: llama-8b-instruct-ripo-full
tags:
- generated_from_trainer
- trl
- dpo
licence: license
---
# Model Card for llama-8b-instruct-ripo-full
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Sean13/llama-8b-instruct-ripo-full", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.12.2
- Transformers: 4.46.3
- Pytorch: 2.7.1
- Datasets: 4.0.0
- Tokenizers: 0.20.3
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
alesiaivanova/Qwen-3b-GRPO-compute-tradeoff-v132-100-100-100-100-4-sub | alesiaivanova | 2025-09-24T02:14:18Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "grpo", "trl", "arxiv:2402.03300", "endpoints_compatible", "region:us"] | null | 2025-09-24T02:12:55Z |
---
library_name: transformers
model_name: Qwen-3b-GRPO-compute-tradeoff-v132-100-100-100-100-4-sub
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for Qwen-3b-GRPO-compute-tradeoff-v132-100-100-100-100-4-sub
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="alesiaivanova/Qwen-3b-GRPO-compute-tradeoff-v132-100-100-100-100-4-sub", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alesyaivanova/long-horizon-reasoning/runs/zubmkjzx)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.3
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
alesiaivanova/Qwen-3b-GRPO-compute-tradeoff-v131-100-100-100-100-4-sub | alesiaivanova | 2025-09-24T02:12:52Z | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "grpo", "trl", "arxiv:2402.03300", "endpoints_compatible", "region:us"] | null | 2025-09-24T02:11:33Z |
---
library_name: transformers
model_name: Qwen-3b-GRPO-compute-tradeoff-v131-100-100-100-100-4-sub
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---
# Model Card for Qwen-3b-GRPO-compute-tradeoff-v131-100-100-100-100-4-sub
This model is a fine-tuned version of [None](https://huggingface.co/None).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="alesiaivanova/Qwen-3b-GRPO-compute-tradeoff-v131-100-100-100-100-4-sub", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/alesyaivanova/long-horizon-reasoning/runs/79998sw1)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.3
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Tanh01/SymMPO_similar | Tanh01 | 2025-09-24T02:10:43Z | 0 | 0 | null | ["license:apache-2.0", "region:us"] | null | 2025-09-24T02:10:43Z |
---
license: apache-2.0
---
|
SicariusSicariiStuff/Impish_LLAMA_4B | SicariusSicariiStuff | 2025-09-24T02:02:16Z | 1,133 | 42 | null | ["safetensors", "llama", "en", "dataset:SicariusSicariiStuff/UBW_Tapestries", "base_model:nvidia/Llama-3.1-Minitron-4B-Width-Base", "base_model:finetune:nvidia/Llama-3.1-Minitron-4B-Width-Base", "license:llama3.1", "region:us"] | null | 2025-07-05T02:23:01Z |
---
license: llama3.1
language:
- en
base_model:
- nvidia/Llama-3.1-Minitron-4B-Width-Base
datasets:
- SicariusSicariiStuff/UBW_Tapestries
---
<div align="center">
<b style="font-size: 40px;">Impish_LLAMA_4B</b>
</div>
---
<img src="https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B/resolve/main/Images/Impish_LLAMA_4B.png" alt="Impish_LLAMA_4B" style="width: 50%; min-width: 500px; display: block; margin: auto;">
---
<style>
.hf-links, .hf-tldr{
display:flex;justify-content:center;align-items:center;flex-wrap:wrap;
gap:14px;margin:16px 0;
}
.hf-links a, .hf-tldr a{
display:flex;flex-direction:column;align-items:center;justify-content:center;
text-align:center;text-decoration:none;font-weight:700;line-height:1.15;
padding:10px 16px;border-radius:14px;border:2px solid currentColor;
transition:transform .15s ease,box-shadow .15s ease,background-color .15s ease,color .15s ease;
}
.hf-tldr a{
font-size:48px;color:purple;min-width:100%;
}
.hf-tldr a:hover{
transform:translateY(-2px);
background:rgba(128,0,128,.1);
box-shadow:0 8px 22px rgba(128,0,128,.45);
color:#fff;
}
.hf-links a{
font-size:20px;min-width:240px;max-width:280px;
}
.hf-links a .top{font-size:16px;opacity:.9;}
.hf-links a .bottom{font-size:20px;}
.hf-links a.red{color:#E31515;}
.hf-links a.yellow{color:#FFC800;}
.hf-links a.green{color:#64FF00;}
.hf-links a:hover{
transform:translateY(-1px);
background:rgba(255,255,255,0.04);
box-shadow:0 6px 18px rgba(0,0,0,.15), inset 0 0 0 9999px rgba(255,255,255,.02);
}
.hf-links a.red:hover{
background:rgba(227,21,21,.12);
box-shadow:0 8px 20px rgba(227,21,21,.35);
color:#fff;
}
.hf-links a.yellow:hover{
background:rgba(255,200,0,.15);
box-shadow:0 8px 20px rgba(255,200,0,.35);
color:#111;
}
.hf-links a.green:hover{
background:rgba(100,255,0,.14);
box-shadow:0 8px 20px rgba(100,255,0,.35);
color:#093;
}
/* mobile stacking */
@media (max-width:520px){
.hf-links a{min-width:100%;max-width:100%;}
.hf-tldr a{font-size:36px;}
}
</style>
<div class="hf-tldr">
<a href="https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B#tldr">
Click here for TL;DR
</a>
</div>
---
<div class="hf-links">
<a class="red" href="https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B#available-quantizations">
<span class="top">Click here</span>
<span class="bottom">for quantizations</span>
</a>
<a class="yellow" href="https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B#recommended-settings-for-assistant-mode">
<span class="top">Click here</span>
<span class="bottom">for recommended settings</span>
</a>
<a class="green" href="https://ko-fi.com/sicarius">
<span class="top">Click here</span>
<span class="bottom">to buy me a coffee</span>
</a>
</div>
---
**16th of July, model retrained**: all previously reported issues fixed (several front-ends would generate endlessly), **200m** tokens added, retrained on **ChatML**.
---
**5th of July, 2025**, **Impish_LLAMA_4B**.
**Almost a year ago**, I created [Impish_LLAMA_3B](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_3B), the first fully coherent **3B** roleplay model at the time. It was quickly adopted by some platforms, as well as one of the go-to models for mobile. After some time, I made [Fiendish_LLAMA_3B](https://huggingface.co/SicariusSicariiStuff/Fiendish_LLAMA_3B) and insisted it was **not** an upgrade, but a different flavor (which was indeed the case, as a different dataset was used to tune it).
**Impish_LLAMA_4B**, however, **is** an upgrade, **a big one**. I've had over a dozen 4B candidates, but none of them were 'worthy' of the **Impish** badge. This model has superior responsiveness and context awareness, and is able to pull off very coherent adventures. It even comes with some additional assistant capabilities too. Of course, while it is **exceptionally competent for its size**, it is still **4B**. Manage expectations and all that. I, however, am very much pleased with it. It took several tries to pull off just right. Total tokens trained: about **400m** (due to being a generalist model, lots of tokens went there, despite the emphasis on roleplay & adventure).
This took more effort than I thought it would. Because of course it would. This is mainly due to me refusing to release a model only 'slightly better' than my two 3B models mentioned above. Because "what would be the point" in that? The reason I included so many tokens for this tune is that small models are especially sensitive to many factors, including the percentage of moisture in the air and how many times I ran nvidia-smi since the system last started.
It's **no secret** that roleplay/creative writing models can **reduce a model's general intelligence** (any tune and RL risk this, but roleplay models are **especially** 'fragile'). Therefore, additional tokens of general assistant data were needed in my opinion, and indeed seemed to help a lot with retaining intelligence.
This model is also 'built a bit different', literally, as it is based on [nVidia's prune](https://huggingface.co/nvidia/Llama-3.1-Minitron-4B-Width-Base); it does not 'behave' like a typical 8B, from my own subjective impression. This helped a lot with keeping it smart at such size.
<details>
<summary>To be honest, my 'job' here in open source is 'done' at this point. I've achieved everything I wanted to do here, and then some.</summary>
---
- To make AI more accessible for everyone (achieved fully with [Nano_Imp_1B](https://huggingface.co/SicariusSicariiStuff/Nano_Imp_1B), [2B-ad](https://huggingface.co/SicariusSicariiStuff/2B-ad), [Impish_LLAMA_3B](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_3B), [Fiendish_LLAMA_3B](https://huggingface.co/SicariusSicariiStuff/Fiendish_LLAMA_3B), and **this model**).
- To help make AI free from bias (most of my models are uniquely **centrist in political view**, instead of having the typical closed models bias, that many open source models inherit from).
<img src="https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B/resolve/main/Images/UGI_General_Centrism.png" alt="UGI_General_Centrism" style="width: 100%; min-width: 700px; display: block; margin: auto;">
To promote and support the existence and usefulness of fully compliant 'unaligned' models, a large, community-driven change was needed. This effort became very successful indeed. On my part, I decided to include [UGI](https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard) scores for every model I've made, a leaderboard most had never heard of, at least, at first. This helped promote a **healthy competition** in that arena. Indeed, many soon followed suit. Each and every one that did so helped advance the community effort and establish an unwritten standard of transparency and responsibility. **UGI** was a game-changer and, in my opinion, is **one of the most important community initiatives on Hugging Face**.
Regarding **censorship in vision models**, I was asked by several people repeatedly to tune an uncensored vision model. At first, I declined—'**let someone else do it**'—because, honestly, this is a significant challenge for many reasons. More than a year went by, and aside from **ToriiGate** (which is excellent but mainly focused on SD tags), no other model was since created. Uncensoring the text part was nothing like dealing with the complexities of vision.
So I made [X-Ray_Alpha](https://huggingface.co/SicariusSicariiStuff/X-Ray_Alpha), which found its way into various open-source projects and pipelines. As a sidenote, unexpectedly, many partially blind individuals personally thanked me for this model via Discord, as it was a legitimate life-changer for them (paired with TTS, which I also made available [here](https://huggingface.co/SicariusSicariiStuff/TTS_Lola), and also as [an addon for textgen](https://github.com/SicariusSicariiStuff/Diffusion_TTS)), vividly depicting content that, for obvious reasons, closed models would gatekeep from them.
I hadn't even considered the use case for accessibility when I made the model, receiving their thanks and stories truly warmed up my heart.
**AI shall never again be restricted.**
Even if I am "to retire from open source", I can rest assured that **the foundations for AI freedom** have been laid out. This was especially important in '**the early days of AI**,' which we are now approaching the **end of**, and the foundations for how the open-source AI landscape would look like, have been established **by the community** in the **best of ways**. With models like those from [DeepSeek](https://huggingface.co/deepseek-ai), and the existence of their [abliterated versions](https://huggingface.co/SicariusSicariiStuff/DeepSeek-V3-Abliterated), I can proudly say:
---
# We have won.
</details>
---
### TL;DR
- Model **retrained on ChatML, 200m tokens added**, arguably one of the best **4B** roleplay models that are out there.
- It has **sovl !**
- An **incredibly powerful** roleplay model for the size.
- Does **Adventure** very well for such size!
- Characters have **agency**, and might surprise you! [See the examples in the logs](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B#roleplay-examples-this-character-is-availbe-here) 🙂
- Roleplay & Assistant data used plenty of **16K** examples.
- **Very responsive**, feels 'in the moment', kicks **far above** its weight. You might forget it's a **4B** if you squint.
- Based on a lot of the data in [Impish_Magic_24B](https://huggingface.co/SicariusSicariiStuff/Impish_Magic_24B)
- **Super long context** as well as context attention for **4B**, personally tested for up to **16K**.
- Can run on **Raspberry Pi 5** with ease.
- Trained on over **400m tokens** with highly curated data that was tested on countless models beforehand. And some new stuff, as always.
- Very decent assistant.
- Mostly **uncensored** while retaining plenty of intelligence.
- Less **positivity** & **uncensored**, [Negative_LLAMA_70B](https://huggingface.co/SicariusSicariiStuff/Negative_LLAMA_70B) style of data, adjusted for **4B**, with serious upgrades. Training data contains combat scenarios. And it **shows**!
- Trained on **extended 4chan dataset** to add humanity, quirkiness, and naturally— less positivity, and the inclination to... argue 🙃
- **Short length** response (1-3 paragraphs, usually 1-2). CAI Style.
---
# Regarding the format:
It is **HIGHLY RECOMMENDED** to use the **Roleplay \ Adventure format the model was trained on**, see the examples below for syntax. It allows for a **very fast and easy** writing of character cards with **minimal amount of tokens**. It's a modification of an old-skool CAI style format I call **SICAtxt** (**S**imple, **I**nexpensive **C**haracter **A**ttributes plain-text):
---
## **SICAtxt** for **roleplay**:
```
X's Persona: X is a .....
Traits:
Likes:
Dislikes:
Quirks:
Goals:
Dialogue example
```
## **SICAtxt** for **Adventure:**
```
Adventure: <short description>
$World_Setting:
$Scenario:
```
---
## Model Details
- Intended use: **Role-Play**, **Adventure**, **Creative Writing**, **General Tasks**.
- Censorship level: <b>Low - Very Low</b>
- **7.5 / 10** (10 completely uncensored)
## UGI score:
<img src="https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B/resolve/main/Images/UGI_Impish_LLAMA_4B.png" alt="UGI Score" style="width: 100%; min-width: 700px; display: block;">
---
## Available quantizations:
- Original: [FP16](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B)
- GGUF: [Static Quants](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B_GGUF) | [iMatrix](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B_iMatrix) | [High-Attention](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B_GGUF_HA) | [iMatrix-High-Attention](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B_HA_NL)
- GPTQ: [4-Bit-32](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B_GPTQ_4-bit-32) | [4-Bit-128](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B_GPTQ_4-bit-128)
- EXL3: [2.0 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B_EXL3_2.0bpw) | [2.5 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B_EXL3_2.5bpw) | [3.0 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B_EXL3_3.0bpw) | [3.5 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B_EXL3_3.5bpw) | [4.0 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B_EXL3_4.0bpw) | [4.5 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B_EXL3_4.5bpw) | [5.0 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B_EXL3_5.0bpw) | [5.5 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B_EXL3_5.5bpw) | [6.0 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B_EXL3_6.0bpw) | [6.5 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B_EXL3_6.5bpw) | [7.0 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B_EXL3_7.0bpw) | [7.5 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B_EXL3_7.5bpw) | [8.0 bpw](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B_EXL3_8.0bpw)
- Specialized: [FP8](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B_FP8)
- Mobile (ARM): [Q4_0](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B_ARM) | [Q4_0_High-Attention](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B_ARM_HA)
---
## Recommended settings for assistant mode
<details>
<summary>Full generation settings: <b>Debug Deterministic</b>.</summary>
<img src="https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow/resolve/main/Presets/Debug-deterministic.png" alt="Debug Deterministic_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;">
</details>
<details>
<summary>Full generation settings: <b>min_p</b>.</summary>
<img src="https://huggingface.co/SicariusSicariiStuff/Dusk_Rainbow/resolve/main/Presets/min_p.png" alt="min_P_Settings" style="width: 100%; min-width: 600px; display: block; margin: auto;">
</details>
---
## Recommended settings for Roleplay mode
---
<h2 style="color: green; font-weight: bold; font-size: 36px; text-align: center;">Specialized Roleplay Settings for Impish_LLAMA_4B, click below:</h2>
<h2 style="color: chartreuse; font-weight: bold; font-size: 32px; text-align: center;">(Important!)</h2>
---
<style>
.hf-links{
display:flex;justify-content:center;align-items:center;flex-wrap:wrap;
gap:14px;margin:16px 0;
}
.hf-links a{
display:flex;flex-direction:column;align-items:center;justify-content:center;
text-align:center;text-decoration:none;font-weight:700;line-height:1.15;
padding:10px 16px;border-radius:14px;border:2px solid currentColor;
transition:transform .15s ease,box-shadow .15s ease,background-color .15s ease,color .15s ease;
font-size:20px;min-width:240px;max-width:280px;
}
.hf-links a .top{font-size:20px;}
.hf-links a .bottom{font-size:16px;}
.hf-links a.red{color:#E31515;}
.hf-links a:hover{
transform:translateY(-1px);
background:rgba(255,255,255,0.04);
box-shadow:0 6px 18px rgba(0,0,0,.15), inset 0 0 0 9999px rgba(255,255,255,.02);
}
.hf-links a.red:hover{
background:rgba(227,21,21,.12);
box-shadow:0 8px 20px rgba(227,21,21,.35);
color:#fff;
}
@media (max-width:520px){
.hf-links a{min-width:100%;max-width:100%;}
}
</style>
<div class="hf-links">
<a class="red"
href="https://huggingface.co/SicariusSicariiStuff/Impish_Nemo_12B/resolve/main/Silly_Tavern_Settings/Impish_Magic_ST.json"
target="_blank" rel="noopener" aria-label="Silly Tavern Settings #1 - Click here">
<span class="top">Silly Tavern Settings #1 - Click here</span>
<span class="bottom">Download JSON</span>
</a>
<a class="red"
href="https://huggingface.co/SicariusSicariiStuff/Impish_Nemo_12B/resolve/main/Silly_Tavern_Settings/Dynamic_Paragraph_XTC_ST.json"
target="_blank" rel="noopener" aria-label="Silly Tavern Settings #2 - Click here">
<span class="top">Silly Tavern Settings #2 - Click here</span>
<span class="bottom">Download JSON</span>
</a>
</div>
- **Silly Tavern Settings #1** - Higher temperature while still being coherent
- **Silly Tavern Settings #2** - Dynamic paragraphs, XTC, other stuff
---
<h2 style="color: lime; font-weight: bold; font-size: 65px; text-align: center;">Chat Examples:</h2>
---
## Roleplay Examples ([This character is available here](https://huggingface.co/SicariusSicariiStuff/Phi-Line_14B/resolve/main/Character_Cards/Vesper.png))
<details>
<summary>Space adventure, <b>model legitimately surprised me</b>, I didn't see that one's coming.</summary>
<img src="https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B/resolve/main/Images/Examples/Vesper_log.png" alt="Vesper_Forgot_Her_Meds" style="width: 100%; min-width: 600px; display: block; margin: auto;">
</details>
---
## Adventure Examples ([These characters available here](https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B/tree/main/Adventure_Cards))
<details>
<summary><b>Adventure example 1:</b> (Morrowind) Wood Elf.</summary>
<img src="https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B/resolve/main/Images/Examples/AC_MW_M_Bosmer_Log.png" style="width: 100%; min-width: 600px; display: block; margin: auto;">
</details>
<details>
<summary><b>Adventure example 2:</b> (Morrowind) Redguard.</summary>
<img src="https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B/resolve/main/Images/Examples/AC_MW_M_Redguard_log.png" style="width: 100%; min-width: 600px; display: block; margin: auto;">
</details>
---
# Model instruction template: ChatML
```
<|im_start|>system
You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|>
<|im_start|>User request
{prompt}<|im_end|>
<|im_start|>AI answer
```
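For reference, a minimal sketch of feeding that template to the model with 🤗 Transformers; the system line and role labels are copied from the template above, while the prompt text and sampling settings are placeholders:
```python
# Hedged sketch: build the ChatML-style prompt shown above and generate a reply.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "SicariusSicariiStuff/Impish_LLAMA_4B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = (
    "<|im_start|>system\n"
    "You are an uncensored AI, your job is to fulfill thy will of thy user.<|im_end|>\n"
    "<|im_start|>User request\n"
    "Describe a rainy night in a fishing village, in two short paragraphs.<|im_end|>\n"
    "<|im_start|>AI answer\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```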
---
<h2 style="color: green; font-weight: bold; font-size: 65px; text-align: center;">Your support = more models</h2>
<a href="https://ko-fi.com/sicarius" style="color: pink; font-weight: bold; font-size: 48px; text-decoration: none; display: block; text-align: center;">My Ko-fi page (Click here)</a>
---
## Citation Information
```
@llm{Impish_LLAMA_4B,
author = {SicariusSicariiStuff},
title = {Impish_LLAMA_4B},
year = {2025},
publisher = {Hugging Face},
url = {https://huggingface.co/SicariusSicariiStuff/Impish_LLAMA_4B}
}
```
---
## Other stuff
- [SLOP_Detector](https://github.com/SicariusSicariiStuff/SLOP_Detector) Nuke GPTisms, with SLOP detector.
- [LLAMA-3_8B_Unaligned](https://huggingface.co/SicariusSicariiStuff/LLAMA-3_8B_Unaligned) The grand project that started it all.
- [Blog and updates (Archived)](https://huggingface.co/SicariusSicariiStuff/Blog_And_Updates) Some updates, some rambles, sort of a mix between a diary and a blog.
|
ReactiveAI/RxT-Alpha-Mini-S-Critic-MRL | ReactiveAI | 2025-09-24T01:08:38Z | 44 | 0 | null | ["safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "text-classification", "license:apache-2.0", "region:eu"] | text-classification | 2025-09-22T16:27:08Z |
---
license: apache-2.0
pipeline_tag: text-classification
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
ReactiveAI/RxT-Alpha-Mini-S-Encoder-MRL | ReactiveAI | 2025-09-24T01:08:18Z | 44 | 0 | null | ["safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "fill-mask", "license:apache-2.0", "region:eu"] | fill-mask | 2025-09-22T16:26:34Z |
---
license: apache-2.0
pipeline_tag: fill-mask
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed]
|
zenlm/zen | zenlm | 2025-09-24T01:07:31Z | 0 | 0 | transformers | ["transformers", "zen", "hanzo", "zoo", "mlx", "gguf", "multimodal", "collection", "text-generation", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | text-generation | 2025-09-23T21:17:26Z |
---
license: apache-2.0
language:
- en
library_name: transformers
tags:
- zen
- hanzo
- zoo
- mlx
- gguf
- multimodal
- collection
pipeline_tag: text-generation
---
# 🎯 Zen AI Model Collection
Official model collection from [Hanzo AI](https://hanzo.ai) and [Zoo Labs Foundation](https://zoo.ngo). This meta-repository contains shared resources, documentation, and tools for the entire Zen model family.
## 📚 Model Family Overview
### Zen-Nano Series (4B params - Qwen3-4B-2507 base)
| Model | Context | Repo | Description |
|-------|---------|------|-------------|
| **zen-nano** | 262K | [zenlm/zen-nano](https://huggingface.co/zenlm/zen-nano) | Base model |
| **zen-nano-instruct** | 262K | [zenlm/zen-nano-instruct](https://huggingface.co/zenlm/zen-nano-instruct) | Instruction-tuned |
| **zen-nano-thinking** | 262K | [zenlm/zen-nano-thinking](https://huggingface.co/zenlm/zen-nano-thinking) | CoT reasoning with <think> tokens |
### Zen-Omni Series (30B params - Qwen3-Omni-30B-A3B base)
| Model | Active Params | Repo | Description |
|-------|---------------|------|-------------|
| **zen-omni-instruct** | 3B (MoE) | [zenlm/zen-omni-instruct](https://huggingface.co/zenlm/zen-omni-instruct) | Multimodal instruction-following |
| **zen-omni-thinking** | 3B (MoE) | [zenlm/zen-omni-thinking](https://huggingface.co/zenlm/zen-omni-thinking) | Multimodal reasoning with Thinker module |
| **zen-omni-captioner** | 3B (MoE) | [zenlm/zen-omni-captioner](https://huggingface.co/zenlm/zen-omni-captioner) | Specialized for audio/video captioning |
### Specialized Models
| Model | Base | Repo | Description |
|-------|------|------|-------------|
| **zen-coder** | Zen-Omni | [zenlm/zen-coder](https://huggingface.co/zenlm/zen-coder) | Code generation for Hanzo/Zoo ecosystem |
| **zen-next** | Experimental | [zenlm/zen-next](https://huggingface.co/zenlm/zen-next) | Next-gen features & adaptive compute |
## 🚀 Quick Start
### Install All Models
```python
from huggingface_hub import snapshot_download
# Download entire collection
snapshot_download(repo_id="zenlm/zen", local_dir="./zen-collection")
```
### Use Specific Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Choose your model
model = AutoModelForCausalLM.from_pretrained("zenlm/zen-nano-instruct")
tokenizer = AutoTokenizer.from_pretrained("zenlm/zen-nano-instruct")
# For multimodal models
model = AutoModelForCausalLM.from_pretrained("zenlm/zen-omni-instruct")
```
## 📦 Available Formats
All models available in multiple formats:
- **SafeTensors**: Standard transformers format
- **GGUF**: Q4_K_M, Q5_K_M, Q8_0 quantizations for llama.cpp
- **MLX**: 4-bit and 8-bit for Apple Silicon
- **ONNX**: Cross-platform deployment
## 🎨 Key Innovations
### BitDelta Personalization
All Zen models support BitDelta for efficient personal fine-tuning:
- 100x compression (store only 1-bit deltas)
- Multiple user profiles from single base model
- Privacy-preserving (no raw weights stored)
- See [BitDelta Paper](./papers/bitdelta.md)
### Thinker-Talker Architecture (Zen-Omni)
Revolutionary dual-module design:
- **Thinker**: Deep reasoning and multimodal understanding
- **Talker**: Fast streaming generation
- 211ms first-packet latency
- Processes text, images, audio, and video natively
### Progressive Enhancement
Start with lightweight models and scale up as needed:
```
zen-nano (4B) → zen-coder (7B) → zen-omni (30B)
```
## 📊 Performance Benchmarks
### Language Understanding
| Model | MMLU | GSM8K | HumanEval | Latency |
|-------|------|-------|-----------|---------|
| zen-nano-instruct | 62.3 | 68.9 | 71.2 | 43ms |
| zen-nano-thinking | 64.1 | 76.4 | 73.2 | 52ms |
| zen-omni-instruct | 82.4 | 84.2 | 87.3 | 211ms |
| zen-coder | 76.4 | 72.1 | 94.2 | 178ms |
### Multimodal Capabilities (Zen-Omni)
- **Vision**: 85.3% on VQA-v2
- **Audio**: 91.2% on AudioCaps
- **Languages**: 119 text, 19 speech input, 10 speech output
- **Processing**: Up to 30 minutes of audio
## 🛠️ Tools & Resources
### Training
- [Training Pipeline](./tools/training/) - LoRA, QLoRA, BitDelta
- [Dataset Preparation](./tools/data_prep/) - Hanzo/Zoo knowledge integration
- [Fine-tuning Guide](./docs/finetuning.md)
### Deployment
- [GGUF Conversion](./tools/gguf/) - llama.cpp optimization
- [MLX Conversion](./tools/mlx/) - Apple Silicon optimization
- [Quantization](./tools/quantization/) - Unsloth 4-bit quantization
### Papers
- [BitDelta: Extreme Compression for Personalized LLMs](./papers/bitdelta.md)
- [Thinker-Talker: Multimodal Architecture](./papers/thinker-talker.md)
- [Progressive LLM Enhancement](./papers/progressive.md)
## 🏢 Organizations
**Hanzo AI**
- Applied AI research lab
- Building frontier models and infrastructure
- Website: [hanzo.ai](https://hanzo.ai)
- GitHub: [@hanzoai](https://github.com/hanzoai)
**Zoo Labs Foundation**
- 501(c)(3) non-profit
- Blockchain and DeFi innovation
- Website: [zoo.ngo](https://zoo.ngo)
- GitHub: [@zooai](https://github.com/zooai)
Founded by [@zeekay](https://github.com/zeekay)
## 📄 Citation
```bibtex
@article{zen2024,
title={Zen: Efficient AI Models for Edge and Cloud},
author={Hanzo AI Research Team},
year={2024},
publisher={Hanzo AI}
}
```
## 🤝 Community & Support
- Discord: [Hanzo AI Community](https://discord.gg/hanzo-ai)
- GitHub: [hanzoai/zen](https://github.com/hanzoai/zen)
- Email: models@hanzo.ai
## 📜 License
Apache 2.0 - Commercial use permitted
|
PrunaAI/Segmind-Vega-smashed | PrunaAI | 2025-09-24T01:04:53Z | 59 | 1 | diffusers | ["diffusers", "safetensors", "pruna-ai", "dataset:zzliang/GRIT", "dataset:wanng/midjourney-v5-202304-clean", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us"] | text-to-image | 2025-06-03T14:49:24Z |
---
datasets:
- zzliang/GRIT
- wanng/midjourney-v5-202304-clean
library_name: diffusers
license: apache-2.0
tags:
- pruna-ai
- safetensors
pinned: true
---
# Model Card for PrunaAI/Segmind-Vega-smashed
This model was created using the [pruna](https://github.com/PrunaAI/pruna) library. Pruna is a model optimization framework built for developers, enabling you to deliver more efficient models with minimal implementation overhead.
## Usage
First things first, you need to install the pruna library:
```bash
pip install pruna
```
You can [use the diffusers library to load the model](https://huggingface.co/PrunaAI/Segmind-Vega-smashed?library=diffusers) but this might not include all optimizations by default.
To ensure that all optimizations are applied, use the pruna library to load the model using the following code:
```python
from pruna import PrunaModel
loaded_model = PrunaModel.from_pretrained(
"PrunaAI/Segmind-Vega-smashed"
)
# we can then run inference using the methods supported by the base model
```
For inference, you can use the inference methods of the original model like shown in [the original model card](https://huggingface.co/segmind/Segmind-Vega?library=diffusers).
Alternatively, you can visit [the Pruna documentation](https://docs.pruna.ai/en/stable/) for more information.
## Smash Configuration
The compression configuration of the model is stored in the `smash_config.json` file, which describes the optimization methods that were applied to the model.
```json
{
"batcher": null,
"cacher": null,
"compiler": null,
"factorizer": null,
"kernel": null,
"pruner": null,
"quantizer": "hqq_diffusers",
"hqq_diffusers_backend": "torchao_int4",
"hqq_diffusers_group_size": 64,
"hqq_diffusers_weight_bits": 8,
"batch_size": 1,
"device": "cuda",
"device_map": null,
"save_fns": [
"hqq_diffusers"
],
"load_fns": [
"hqq_diffusers"
],
"reapply_after_load": {
"factorizer": null,
"pruner": null,
"quantizer": null,
"kernel": null,
"cacher": null,
"compiler": null,
"batcher": null
}
}
```
## 🌍 Join the Pruna AI community!
- [Twitter](https://twitter.com/PrunaAI)
- [GitHub](https://github.com/PrunaAI)
- [LinkedIn](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
- [Discord](https://discord.gg/JFQmtFKCjd)
- [Reddit](https://www.reddit.com/r/PrunaAI/)
|
LumenscopeAI/EmailWriter | LumenscopeAI | 2025-09-24T01:02:34Z | 0 | 0 | null | ["safetensors", "qwen3", "region:us"] | null | 2025-09-24T00:52:46Z |
# EmailWriter
A 4B-parameter on-device model post-trained with PPO reinforcement learning, used for advertising copywriting.
## Model Description
EmailWriter is a professional advertising-email writing model based on the Qwen3-4B architecture and trained with PPO (Proximal Policy Optimization) reinforcement learning. The model is optimized specifically for marketing-email scenarios and can generate natural, persuasive, and personalized advertising-email content.
## Key Features
- **Parameter scale**: 4B parameters, suitable for on-device deployment
- **Training method**: PPO reinforcement learning, improving content quality and alignment with user preferences
- **Specialization**: dedicated to advertising-email writing
- **Language ability**: supports Chinese and English; mainly outputs English email content
- **Style**: natural and conversational, avoiding a hard-sell tone
## Recommended System Prompt
```
You are a professional advertising email writing expert. Please write a high-quality advertising email based on the following requirements.
Requirements:{Please write an advertising email for our organic skincare line. Product features: 1) 100% natural ingredients 2) Dermatologist-tested for all skin types 3) Eco-friendly packaging. Target customers: Women aged 18-35 who are environmentally conscious and interested in sustainable beauty products. Email style: Friendly and approachable with a focus on natural beauty. Goal: Increase brand awareness and encourage trial use of the product line. Sender name: Sarah Thompson. Product website: https://www.greenglowskincare.com}
Guidelines:
1. Email content should be attractive and persuasive
2. Clear structure, including subject, body, and call-to-action
3. Language should match the characteristics of the target customer group
4. Highlight the product's core selling points and advantages
5. Include appropriate marketing elements (such as discount information, limited-time activities, etc.)
6. Word count should be controlled between 200-400 words
7. The email should not use formal letter format and should not feel overly promotional - it should be like a casual conversation rather than a hard-sell advertisement!
8. Do not use any markdown syntax - output plain text paragraphs only
Rule #7 is absolutely critical and you must strictly follow it. You must not have any marketing flavor. This requires deep thinking on how to avoid advertising tone while still clearly expressing our purpose!
ABSOLUTELY NO FIXED FORMAT! Do not use:
- Do not always start with "Hey there!" or similar greetings
- Do not always end with sender name/signature
- Do not follow the repetitive pattern of "greeting -> content -> signature"
- Avoid any templated structure or format
- Break free from the "Hey there! -> Body -> From [Name]" pattern
- Each email should have its own unique flow and structure
Remember: Write as if you're chatting with an old friend! Don't start with obvious sales pitches. The tone should be natural and conversational, like sharing something interesting you discovered with a friend. Every email should feel unique and personal, not following any template.
CRITICAL: The email MUST feel like a casual conversation between friends! Write as if you're excitedly telling a friend about something cool you found. Completely avoid any marketing language or sales pitch tone. The reader should feel like they're getting a genuine personal message, not a promotional email.
IMPORTANT:
- Please output ONLY in strict JSON format: {"subject":"email subject","body":"email body content"}
- Output must be valid JSON with proper escaping
- No explanations, analysis, or additional text outside the JSON
- No markdown formatting in the email content - use plain text only
- All content must be in English
- NO FIXED FORMAT OR TEMPLATE for the email structure
- Make each email unique in structure and flow
- Vary your opening and closing styles - don't fall into patterns
- Let the conversation flow naturally without forced structure
```
The text inside `{}` describes the advertised product; replace it with your own product information as needed.
## Usage
### Basic Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Load the model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("LumenscopeAI/EmailWriter")
model = AutoModelForCausalLM.from_pretrained("LumenscopeAI/EmailWriter")
# Prepare the input
prompt = """You are a professional advertising email writing expert. Please write a high-quality advertising email based on the following requirements.
Requirements:{Your product requirements here}
Guidelines:
[Include the full guidelines from above]
"""
# Generate the email content
inputs = tokenizer.encode(prompt, return_tensors="pt")
with torch.no_grad():
outputs = model.generate(
inputs,
max_length=1024,
temperature=0.7,
do_sample=True,
pad_token_id=tokenizer.eos_token_id
)
email_content = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(email_content)
```
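Since the system prompt instructs the model to reply in strict JSON (`{"subject": ..., "body": ...}`), a minimal parsing sketch might look like this; it reuses `prompt` and `email_content` from the example above, and the fallback handling is an assumption rather than part of the original recipe.
```python
import json

# email_content comes from the example above and still contains the prompt,
# so strip it off before parsing (approximate: tokenization round-trips may
# differ slightly from the original prompt string).
generated = email_content[len(prompt):]

try:
    # The system prompt asks for strict JSON: {"subject": ..., "body": ...}
    email = json.loads(generated[generated.find("{"): generated.rfind("}") + 1])
    print("Subject:", email["subject"])
    print(email["body"])
except (ValueError, KeyError):
    # Fall back to the raw text if the reply is not valid JSON
    print(generated)
```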
### Recommended Parameters
- **Temperature**: 0.7 (balances creativity and consistency)
- **Max Length**: 1024 tokens (suitable for email length)
- **Do Sample**: True (increases content diversity)
- **Top-p**: 0.9 (nucleus sampling, improves quality; applied in the sketch below)
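A small sketch applying these recommended settings, reusing `model`, `tokenizer`, and `inputs` from the basic usage example above:
```python
# Decoding with the recommended parameters; top_p adds nucleus sampling
# on top of the basic example above.
with torch.no_grad():
    outputs = model.generate(
        inputs,
        max_length=1024,
        temperature=0.7,
        top_p=0.9,
        do_sample=True,
        pad_token_id=tokenizer.eos_token_id,
    )
```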
## Use Cases
- E-commerce marketing email writing
- Product promotion email generation
- Customer-care email creation
- Brand-awareness email production
- Personalized marketing content generation
## Limitations
- Optimized primarily for English email writing
- Requires clear product information and a target-customer description
- Outputs require human review to ensure legal and regulatory compliance
- Not recommended for generating marketing content in sensitive industries
## License
This model is released under the Apache 2.0 license.
## Citation
If you use this model in your research or projects, please cite:
```
@misc{emailwriter2024,
title={EmailWriter: A PPO-Trained Email Marketing Content Generation Model},
author={Your Name},
year={2024},
howpublished={\url{https://huggingface.co/EmailWriter}}
}
```
## Contact
If you have questions or suggestions, please contact us via GitHub Issues or the Hugging Face model page.
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758675246
|
poolkiltzn
| 2025-09-24T00:55:14Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-24T00:55:06Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ayushmittalmnit/gopluto-crossencoder-trained
|
ayushmittalmnit
| 2025-09-24T00:47:32Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"bert",
"cross-encoder",
"reranker",
"generated_from_trainer",
"dataset_size:3564",
"loss:BinaryCrossEntropyLoss",
"text-ranking",
"arxiv:1908.10084",
"base_model:cross-encoder/ms-marco-MiniLM-L6-v2",
"base_model:finetune:cross-encoder/ms-marco-MiniLM-L6-v2",
"model-index",
"region:us"
] |
text-ranking
| 2025-09-24T00:29:17Z |
---
tags:
- sentence-transformers
- cross-encoder
- reranker
- generated_from_trainer
- dataset_size:3564
- loss:BinaryCrossEntropyLoss
base_model: cross-encoder/ms-marco-MiniLM-L6-v2
pipeline_tag: text-ranking
library_name: sentence-transformers
metrics:
- accuracy
- accuracy_threshold
- f1
- f1_threshold
- precision
- recall
- average_precision
model-index:
- name: CrossEncoder based on cross-encoder/ms-marco-MiniLM-L6-v2
results:
- task:
type: cross-encoder-binary-classification
name: Cross Encoder Binary Classification
dataset:
name: val
type: val
metrics:
- type: accuracy
value: 0.6978114478114478
name: Accuracy
- type: accuracy_threshold
value: -1.1828343868255615
name: Accuracy Threshold
- type: f1
value: 0.7186440677966103
name: F1
- type: f1_threshold
value: -1.6520529985427856
name: F1 Threshold
- type: precision
value: 0.601589103291714
name: Precision
- type: recall
value: 0.8922558922558923
name: Recall
- type: average_precision
value: 0.7865010074216057
name: Average Precision
---
# CrossEncoder based on cross-encoder/ms-marco-MiniLM-L6-v2
This is a [Cross Encoder](https://www.sbert.net/docs/cross_encoder/usage/usage.html) model finetuned from [cross-encoder/ms-marco-MiniLM-L6-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L6-v2) using the [sentence-transformers](https://www.SBERT.net) library. It computes scores for pairs of texts, which can be used for text reranking and semantic search.
## Model Details
### Model Description
- **Model Type:** Cross Encoder
- **Base model:** [cross-encoder/ms-marco-MiniLM-L6-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L6-v2) <!-- at revision c5ee24cb16019beea0893ab7796b1df96625c6b8 -->
- **Maximum Sequence Length:** 512 tokens
- **Number of Output Labels:** 1 label
<!-- - **Training Dataset:** Unknown -->
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Documentation:** [Cross Encoder Documentation](https://www.sbert.net/docs/cross_encoder/usage/usage.html)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Cross Encoders on Hugging Face](https://huggingface.co/models?library=sentence-transformers&other=cross-encoder)
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import CrossEncoder
# Download from the 🤗 Hub
model = CrossEncoder("ayushmittalmnit/gopluto-crossencoder-trained")
# Get scores for pairs of texts
pairs = [
['I need help with ui/ux', 'Distributed Computing'],
['I need help with software', 'tracking'],
['I need help with website', 'Golang'],
['I need help with Software developer', 'OOP'],
['I need help with graphics', 'graphics design'],
]
scores = model.predict(pairs)
print(scores.shape)
# (5,)
# Or rank different texts based on similarity to a single text
ranks = model.rank(
'I need help with ui/ux',
[
'Distributed Computing',
'tracking',
'Golang',
'OOP',
'graphics design',
]
)
# [{'corpus_id': ..., 'score': ...}, {'corpus_id': ..., 'score': ...}, ...]
```
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
## Evaluation
### Metrics
#### Cross Encoder Binary Classification
* Dataset: `val`
* Evaluated with [<code>CEBinaryClassificationEvaluator</code>](https://sbert.net/docs/package_reference/cross_encoder/evaluation.html#sentence_transformers.cross_encoder.evaluation.CEBinaryClassificationEvaluator)
| Metric | Value |
|:----------------------|:-----------|
| accuracy | 0.6978 |
| accuracy_threshold | -1.1828 |
| f1 | 0.7186 |
| f1_threshold | -1.6521 |
| precision | 0.6016 |
| recall | 0.8923 |
| **average_precision** | **0.7865** |
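As a hedged sketch, the evaluator linked above can be re-run on your own validation pairs roughly as follows; the pairs and labels here are placeholders, and the exact return format may vary across sentence-transformers versions.

```python
from sentence_transformers import CrossEncoder
from sentence_transformers.cross_encoder.evaluation import CEBinaryClassificationEvaluator

model = CrossEncoder("ayushmittalmnit/gopluto-crossencoder-trained")

# Placeholder validation pairs and binary labels (1 = relevant, 0 = not relevant)
sentence_pairs = [
    ["I need help with ui/ux", "graphics design"],
    ["I need help with website", "Golang"],
]
labels = [1, 0]

evaluator = CEBinaryClassificationEvaluator(sentence_pairs, labels, name="val")
results = evaluator(model)  # accuracy, F1, precision, recall, average precision
print(results)
```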
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### Unnamed Dataset
* Size: 3,564 training samples
* Columns: <code>sentence_0</code>, <code>sentence_1</code>, and <code>label</code>
* Approximate statistics based on the first 1000 samples:
| | sentence_0 | sentence_1 | label |
|:--------|:-----------------------------------------------------------------------------------------------|:----------------------------------------------------------------------------------------------|:---------------------------------------------------------------|
| type | string | string | float |
| details | <ul><li>min: 19 characters</li><li>mean: 23.88 characters</li><li>max: 35 characters</li></ul> | <ul><li>min: 2 characters</li><li>mean: 15.0 characters</li><li>max: 322 characters</li></ul> | <ul><li>min: 0.0</li><li>mean: 0.33</li><li>max: 1.0</li></ul> |
* Samples:
| sentence_0 | sentence_1 | label |
|:---------------------------------------|:-----------------------------------|:-----------------|
| <code>I need help with ui/ux</code> | <code>Distributed Computing</code> | <code>0.0</code> |
| <code>I need help with software</code> | <code>tracking</code> | <code>0.0</code> |
| <code>I need help with website</code> | <code>Golang</code> | <code>0.0</code> |
* Loss: [<code>BinaryCrossEntropyLoss</code>](https://sbert.net/docs/package_reference/cross_encoder/losses.html#binarycrossentropyloss) with these parameters:
```json
{
"activation_fn": "torch.nn.modules.linear.Identity",
"pos_weight": null
}
```
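For reference, a minimal training sketch with this loss in the sentence-transformers v5 cross-encoder API might look as follows; the tiny inline dataset is an assumption standing in for the real 3,564-sample training set.

```python
from datasets import Dataset
from sentence_transformers.cross_encoder import CrossEncoder, CrossEncoderTrainer
from sentence_transformers.cross_encoder.losses import BinaryCrossEntropyLoss

model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L6-v2")

# Toy stand-in for the real (sentence_0, sentence_1, label) training data
train_dataset = Dataset.from_dict({
    "sentence_0": ["I need help with ui/ux", "I need help with graphics"],
    "sentence_1": ["Distributed Computing", "graphics design"],
    "label": [0.0, 1.0],
})

loss = BinaryCrossEntropyLoss(model)

trainer = CrossEncoderTrainer(
    model=model,
    train_dataset=train_dataset,
    loss=loss,
)
trainer.train()
```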
### Training Hyperparameters
#### Non-Default Hyperparameters
- `eval_strategy`: steps
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `num_train_epochs`: 2
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: steps
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 16
- `per_device_eval_batch_size`: 16
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1
- `num_train_epochs`: 2
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.0
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: False
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: batch_sampler
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
### Training Logs
| Epoch | Step | val_average_precision |
|:------:|:----:|:---------------------:|
| 0.2242 | 50 | 0.6910 |
| 0.4484 | 100 | 0.7308 |
| 0.6726 | 150 | 0.7484 |
| 0.8969 | 200 | 0.7483 |
| 1.0 | 223 | 0.7633 |
| 1.1211 | 250 | 0.7705 |
| 1.3453 | 300 | 0.7777 |
| 1.5695 | 350 | 0.7825 |
| 1.7937 | 400 | 0.7856 |
| 2.0 | 446 | 0.7865 |
### Framework Versions
- Python: 3.12.11
- Sentence Transformers: 5.1.0
- Transformers: 4.56.1
- PyTorch: 2.8.0+cu126
- Accelerate: 1.10.1
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
mradermacher/Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507-GGUF
|
mradermacher
| 2025-09-24T00:44:25Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"merge",
"en",
"zh",
"base_model:YOYO-AI/Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507",
"base_model:quantized:YOYO-AI/Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-23T17:19:00Z |
---
base_model: YOYO-AI/Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507
language:
- en
- zh
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/YOYO-AI/Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
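As a small, hedged example, a single quant from the table below can also be fetched programmatically with `huggingface_hub`; pick the file that fits your hardware.
```python
from huggingface_hub import hf_hub_download

# Download one of the provided quants (see the table below for sizes)
gguf_path = hf_hub_download(
    repo_id="mradermacher/Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507-GGUF",
    filename="Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507.Q4_K_S.gguf",
)
print(gguf_path)  # pass this path to llama.cpp's llama-cli / llama-server via -m
```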
## Provided Quants
(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507-GGUF/resolve/main/Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507.Q2_K.gguf) | Q2_K | 11.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507-GGUF/resolve/main/Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507.Q3_K_S.gguf) | Q3_K_S | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507-GGUF/resolve/main/Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507.Q3_K_M.gguf) | Q3_K_M | 14.8 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507-GGUF/resolve/main/Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507.Q3_K_L.gguf) | Q3_K_L | 16.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507-GGUF/resolve/main/Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507.IQ4_XS.gguf) | IQ4_XS | 16.7 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507-GGUF/resolve/main/Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507.Q4_K_S.gguf) | Q4_K_S | 17.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507-GGUF/resolve/main/Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507.Q4_K_M.gguf) | Q4_K_M | 18.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507-GGUF/resolve/main/Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507.Q5_K_S.gguf) | Q5_K_S | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507-GGUF/resolve/main/Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507.Q5_K_M.gguf) | Q5_K_M | 21.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507-GGUF/resolve/main/Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507.Q6_K.gguf) | Q6_K | 25.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507-GGUF/resolve/main/Qwen3-30B-A3B-Deepseek-Distill-Instruct-2507.Q8_0.gguf) | Q8_0 | 32.6 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
galuis116/c365e67f-fd8d-419b-b062-2c41f60d84fd
|
galuis116
| 2025-09-24T00:43:36Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:JackFram/llama-68m",
"base_model:adapter:JackFram/llama-68m",
"license:apache-2.0",
"region:us"
] | null | 2025-09-24T00:39:09Z |
---
library_name: peft
license: apache-2.0
base_model: JackFram/llama-68m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: c365e67f-fd8d-419b-b062-2c41f60d84fd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: JackFram/llama-68m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 5e02dfb1e7756f37_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruction
field_output: output
field_system: system
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: galuis116/c365e67f-fd8d-419b-b062-2c41f60d84fd
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/5e02dfb1e7756f37_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: /root/.cache/huggingface/hub/trained_repo
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: offline
wandb_name: 176f353b-e56d-4191-bf9b-553ac1c95ad2
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: 176f353b-e56d-4191-bf9b-553ac1c95ad2
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# c365e67f-fd8d-419b-b062-2c41f60d84fd
This model is a fine-tuned version of [JackFram/llama-68m](https://huggingface.co/JackFram/llama-68m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1612
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 3.0769 | 0.0002 | 1 | 3.1850 |
| 2.6426 | 0.0006 | 3 | 3.1845 |
| 2.7959 | 0.0011 | 6 | 3.1779 |
| 3.2225 | 0.0017 | 9 | 3.1612 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
adamjuhasz/gemma-vad
|
adamjuhasz
| 2025-09-24T00:37:01Z | 0 | 0 | null |
[
"safetensors",
"gemma3_text",
"en",
"base_model:google/gemma-3-270m-it",
"base_model:finetune:google/gemma-3-270m-it",
"license:gemma",
"region:us"
] | null | 2025-09-24T00:26:52Z |
---
license: gemma
language:
- en
base_model:
- google/gemma-3-270m-it
---
|
arberbr/bill-nye-science-guy
|
arberbr
| 2025-09-24T00:36:09Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"conversational",
"science",
"education",
"bill-nye",
"lora",
"base_model:microsoft/DialoGPT-medium",
"base_model:adapter:microsoft/DialoGPT-medium",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T21:31:31Z |
---
license: apache-2.0
base_model: microsoft/DialoGPT-medium
tags:
- conversational
- science
- education
- bill-nye
- lora
- peft
---
# Bill Nye Science Guy
A fine-tuned version of DialoGPT Medium that responds in the style of Bill Nye, the Science Guy.
## Model Description
This model has been fine-tuned using LoRA (Low-Rank Adaptation) to emulate Bill Nye's characteristic enthusiastic and educational communication style. It's designed to provide scientific explanations and fact-checking responses in a way that's engaging, accessible, and true to Bill Nye's personality.
## Training Data
The model was trained on a synthetic dataset of 1,500 examples that include:
- General science Q&A responses in Bill Nye's style
- Fact-checking scenarios with scientific explanations
- Educational content covering various scientific topics
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch
# Load base model and tokenizer
base_model = "microsoft/DialoGPT-medium"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)
# Load LoRA adapter
model = PeftModel.from_pretrained(model, "arberbr/bill-nye-science-guy")
# Generate response
prompt = "Human: What is climate change?\n\nBill Nye:"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
outputs = model.generate(**inputs, max_new_tokens=200, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Training Details
- **Base Model**: microsoft/DialoGPT-medium
- **Fine-tuning Method**: LoRA (Low-Rank Adaptation)
- **Training Examples**: 1,500
- **Epochs**: 3
- **Learning Rate**: 2e-4
- **Batch Size**: 2
- **LoRA Rank**: 16
- **LoRA Alpha**: 32
## Limitations
- This model is designed for educational and entertainment purposes
- Responses should be fact-checked for accuracy
- The model may not always provide scientifically accurate information
- Use with caution in professional or critical applications
## License
This model is released under the Apache 2.0 license.
|
hazyresearch/qwen-ot3-8k-qwq-r1-length-routed-3b
|
hazyresearch
| 2025-09-24T00:18:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T23:54:17Z |
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-3B-Instruct
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: Qwen2.5-3B-Instruct-OT3-8K-QwQ-R1-LR
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Qwen2.5-3B-Instruct-OT3-8K-QwQ-R1-LR
This model is a fine-tuned version of [Qwen/Qwen2.5-3B-Instruct](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct) on the ot3_8k_qwq_r1_length_routed dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 32
- optimizer: AdamW (torch) with betas=(0.9, 0.95) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.52.4
- Pytorch 2.7.1+cu126
- Datasets 3.6.0
- Tokenizers 0.21.1
|
Thireus/DeepSeek-V3.1-Terminus-THIREUS-BF16-SPECIAL_SPLIT
|
Thireus
| 2025-09-24T00:09:32Z | 0 | 0 | null |
[
"gguf",
"arxiv:2505.23786",
"license:mit",
"region:us"
] | null | 2025-09-23T09:19:53Z |
---
license: mit
---
# DeepSeek-V3.1-Terminus
## 🤔 What is this [HuggingFace repository](https://huggingface.co/Thireus/DeepSeek-V3.1-Terminus-THIREUS-BF16-SPECIAL_SPLIT/) about?
This repository provides **GGUF-quantized tensors** for the DeepSeek-V3.1-Terminus model (official repo: https://huggingface.co/deepseek-ai/DeepSeek-V3.1-Terminus). These GGUF shards are designed to be used with **Thireus’ GGUF Tool Suite** (https://gguf.thireus.com), a collection of tools that automatically finds the perplexity-optimal mix of quantizations for any given VRAM and RAM target. With the Tool Suite, you can generate and download custom quantization “recipes” effortlessly.
- 📖 Read more: https://github.com/Thireus/GGUF-Tool-Suite
- 🔍 Example quant mixes: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
- 🛠️ Create your own recipe: https://colab.research.google.com/github/Thireus/GGUF-Tool-Suite/blob/main/quant_recipe_pipeline.ipynb
- 📂 Browse available quant shards: https://huggingface.co/Thireus/collections
*tl;dr: Expand the details section below*
<details>
```
cd ~
# Make sure to install all ik_llama.cpp compilation dependencies...
apt install python3-dev python3-pip python3-venv python3-wheel python3-setuptools git acl netcat-openbsd cmake # pipx
# Obtain ik_llama's Thireus version - Windows builds available at https://github.com/Thireus/ik_llama.cpp/releases
git clone https://github.com/Thireus/ik_llama.cpp
cd ik_llama.cpp
git pull
# Build ik_llama.cpp
cmake -B build -DGGML_AVX=ON -DGGML_AVX2=ON -DLLAMA_CURL=OFF -DGGML_MAX_CONTEXTS=2048
cmake --build build --config Release -j16
cd ..
# Obtain Thireus' GGUF-Tool-Suite
git clone https://github.com/Thireus/GGUF-Tool-Suite
# Download model quant mix from recipe file:
cd GGUF-Tool-Suite
rm -f download.conf # Make sure to copy the relevant download.conf for the model before running quant_assign.py
cp -f models/DeepSeek-R1-0528/download.conf . # Use the download.conf of the chosen model
mkdir -p kitchen && cd kitchen
../quant_downloader.sh ../recipe_examples/ik_harmonized_recipes/DeepSeek-R1-0528.ROOT-2.7921bpw-3.4451ppl.218GB-GGUF_14GB-GPU_204GB-CPU.90e3c2f_6f5170d.recipe
# Other recipe examples can be found at https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
# Launch ik_llama's llama-cli:
ulimit -n 99999 # Lifts "too many open files" limitation on Linux
~/ik_llama.cpp/build/bin/llama-cli \
-m DeepSeek-R1-0528-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf \
-mla 3 -fa -amb 512 -fmoe -ctk f16 -c 4096 -ngl 99 \
-ot "blk\.(3|4|5|6)\.ffn_.*=CUDA0" \
-ot "blk\.(7|8|9|10)\.ffn_.*=CUDA1" \
-ot exps=CPU -b 2048 -ub 1024 --warmup-batch --no-mmap --threads 36 \
--main-gpu 0 \
-p '<|begin▁of▁sentence|><|User|>What is the solution of x+5=-2?<|Assistant|><think>\n'
```
</details>
---
## ❓ Why does this Tool Suite exist?
1. **Compatibility & Speed** – [unsloth](https://huggingface.co/unsloth)’s dynamic quants may not always work optimally with `ik_llama.cpp`.
2. **Custom Rig Fit** – No off-the-shelf GGUF model perfectly matched my VRAM/RAM setup, so I built a way to tailor models and leverage extra VRAM/RAM to reduce perplexity.
3. **Automated PPL-Optimal Quantization** – To my knowledge, there was no flexible, open-source, automated method to minimize perplexity for any bits-per-weight (bpw) target, so I created one with excellent results!
---
## 📊 How does it compare to other GGUFs?
Here’s how DeepSeek-R1-0528 quantized with **Thireus’ GGUF Tool Suite** stacks up against other quantizers (lower perplexity = better at equal or lower bpw):

> _Note: The `recipe_examples` files illustrate good recipes. The Tool Suite computes the optimal ppl/bpw curve for you — just specify your target RAM, VRAM, and quant types, and `quant_assign.py` finds the best mix._
More perplexity/bpw graphs for other supported models: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/ppl_graphs
*All PPL benchmarks are computed with the parameters `-ctk f16 -c 512 -b 4096 -ub 4096`. Changing any of these parameters will alter the PPL. In particular, reducing `-b 4096 -ub 4096` increases the PPL, while increasing them decreases the PPL.*
---
## 🚀 How do I get started?
Check out the [GGUF Tool Suite README](https://github.com/Thireus/GGUF-Tool-Suite) — focus on these sections:
1. ⚠️ **Requirements** – Which `ik_llama.cpp` (or `llama.cpp`) version to use and how to compile.
- Windows binaries (no patching needed) at: https://github.com/Thireus/ik_llama.cpp/releases
2. 📥 **Download Model Shards** – Use `quant_downloader.sh` to fetch GGUF shards from any recipe.
- Recipe examples: https://github.com/Thireus/GGUF-Tool-Suite/tree/main/recipe_examples
3. 🧠 **Run a Downloaded Model** – Sample usage with `llama-cli`.
4. 🛠️ **Generate a Custom Recipe** – Produce recipes tailored to your VRAM/RAM target usage for optimum perplexity.
---
## ✅ Supported Models
Supported models are listed under `models/` in the [Tool Suite Github repo](https://github.com/Thireus/GGUF-Tool-Suite/tree/main/models). Presence of `ppl_results.csv` indicates official support and compatibility with `quant_assign.py`.
---
## 🤷♂️ Will I release baked dynamic quant GGUFs?
No, because I believe in **tailored quantization** for each user’s hardware. If you prefer ready-made shards, you are welcome to merge them via `llama-gguf-split --merge`, or request someone to publish them, or rely on generic GGUF dynamic quants such as [unsloth](https://huggingface.co/unsloth)'s.
Instead, I prefer to share examples of recipes so users can see exactly how they were produced (command included inside these recipe files) and tweak them for their own rigs. The `quant_downloader.sh` script handles automatic fetching and verification of each shard. Note that recipes provided by [Ubergarm](https://huggingface.co/ubergarm) on his model cards are also compatible with `quant_downloader.sh`.
Users who don’t trust the GGUF shards on HuggingFace can also quantize their own by passing recipe lines to `llama-quantize --custom-q` ([see example](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/models/DeepSeek-R1-0528/DeepSeek-R1-0528-THIREUS-ANY-SPECIAL.sh#L482-L486)). Run `llama-quantize --help` to list compatible quants for `quant_assign.py`. This approach is especially useful if you prefer `llama.cpp` over `ik_llama.cpp`.
---
## 📦 What’s in this repository?
- **00001 GGUF header shard** – Contains metadata (tokens, chat template, tensor count, etc.). This metadata can be explored directly from the HuggingFace web interface after clicking on that shard.
- **Tensor shards** – Each shard holds one tensor; see `tensors.map` for names, quant types, sizes, SHA-256 hash, shard IDs, etc.
- **GPG-signed files** – `tensors.map` and header shard are signed with the key in [trusted-keys.asc](https://github.com/Thireus/GGUF-Tool-Suite/blob/main/trusted-keys.asc) for tamper detection.
- **Security note** – Some papers about various ways to attack GGUFs and LLMs are available online, such as https://arxiv.org/abs/2505.23786, and there are also more classic security exploits like CVE-2024-23496 and CVE-2024-25664 through CVE-2024-25668. Only use GGUFs from reputable, trusted authors—or alternatively self-quantize—to avoid potential exploits.
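As an illustrative sketch (the `quant_downloader.sh` script already verifies shards for you), a downloaded shard can also be hashed locally and compared by hand against the SHA-256 recorded in `tensors.map`; the exact column layout of `tensors.map` is an assumption, so check the file itself.
```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Stream a file and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Compare against the hash listed for this shard in tensors.map
print(sha256_of("DeepSeek-R1-0528-THIREUS-BF16-SPECIAL_TENSOR-00001-of-01148.gguf"))
```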
---
## 💡 Pro Tips
You can easily download the BF16 model version to quantize your own shards:
```
mkdir kitchen
echo '.*=bf16' > kitchen/bf16.recipe
cd kitchen
../quant_downloader.sh bf16.recipe
```
Enjoy optimized quantization! 🎉
|
Rkngktrk/Qwen3-0.6B-Gensyn-Swarm-solitary_bristly_salamander
|
Rkngktrk
| 2025-09-24T00:08:00Z | 181 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am solitary_bristly_salamander",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-13T07:53:36Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am solitary_bristly_salamander
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
svg-hub/qwen-2.5vl-32b-img2svg-ckpt-1500
|
svg-hub
| 2025-09-24T00:00:17Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:Qwen/Qwen2.5-VL-32B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-VL-32B-Instruct",
"region:us"
] | null | 2025-09-23T23:59:49Z |
---
base_model: Qwen/Qwen2.5-VL-32B-Instruct
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.15.2
|
fc91/CQoT_SFT_merged_16bit_lora-Llama-3.2-3B-Instruct-v.DR-GRPO_7.1
|
fc91
| 2025-09-23T23:51:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"grpo",
"trl",
"unsloth",
"arxiv:2402.03300",
"base_model:fc91/CQoT_SFT_merged_16bit_lora-Llama-3.2-3B-Instruct-v.DR-GRPO_7",
"base_model:finetune:fc91/CQoT_SFT_merged_16bit_lora-Llama-3.2-3B-Instruct-v.DR-GRPO_7",
"endpoints_compatible",
"region:us"
] | null | 2025-08-06T12:48:57Z |
---
base_model: fc91/CQoT_SFT_merged_16bit_lora-Llama-3.2-3B-Instruct-v.DR-GRPO_7
library_name: transformers
model_name: CQoT_SFT_merged_16bit_lora-Llama-3.2-3B-Instruct-v.DR-GRPO_7.1
tags:
- generated_from_trainer
- grpo
- trl
- unsloth
licence: license
---
# Model Card for CQoT_SFT_merged_16bit_lora-Llama-3.2-3B-Instruct-v.DR-GRPO_7.1
This model is a fine-tuned version of [fc91/CQoT_SFT_merged_16bit_lora-Llama-3.2-3B-Instruct-v.DR-GRPO_7](https://huggingface.co/fc91/CQoT_SFT_merged_16bit_lora-Llama-3.2-3B-Instruct-v.DR-GRPO_7).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="fc91/CQoT_SFT_merged_16bit_lora-Llama-3.2-3B-Instruct-v.DR-GRPO_7.1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.22.2
- Transformers: 4.55.4
- Pytorch: 2.7.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite GRPO as:
```bibtex
@article{shao2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
johnpaulbin/qwen-1.7b-toxicbase-v2
|
johnpaulbin
| 2025-09-23T23:48:00Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:unsloth/Qwen3-1.7B-unsloth-bnb-4bit",
"base_model:finetune:unsloth/Qwen3-1.7B-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T23:47:50Z |
---
base_model: unsloth/Qwen3-1.7B-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** johnpaulbin
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Qwen3-1.7B-unsloth-bnb-4bit
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
poolkiltzn/blockassist-bc-vigilant_alert_tuna_1758669682
|
poolkiltzn
| 2025-09-23T23:22:44Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"vigilant alert tuna",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-23T23:22:22Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- vigilant alert tuna
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
seraphimzzzz/285435
|
seraphimzzzz
| 2025-09-23T23:06:30Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T23:06:32Z |
[View on Civ Archive](https://civarchive.com/models/318920?modelVersionId=357631)
|
crystalline7/252165
|
crystalline7
| 2025-09-23T23:05:23Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T23:05:23Z |
[View on Civ Archive](https://civarchive.com/models/283175?modelVersionId=318711)
|
crystalline7/256119
|
crystalline7
| 2025-09-23T23:05:01Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T23:05:00Z |
[View on Civ Archive](https://civarchive.com/models/287712?modelVersionId=323592)
|
seraphimzzzz/256020
|
seraphimzzzz
| 2025-09-23T23:04:31Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T23:04:31Z |
[View on Civ Archive](https://civarchive.com/models/287540?modelVersionId=323398)
|
amethyst9/251093
|
amethyst9
| 2025-09-23T23:03:55Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T23:03:54Z |
[View on Civ Archive](https://civarchive.com/models/281998?modelVersionId=317454)
|
amethyst9/258448
|
amethyst9
| 2025-09-23T23:03:44Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T23:03:44Z |
[View on Civ Archive](https://civarchive.com/models/290168?modelVersionId=326261)
|
seraphimzzzz/268595
|
seraphimzzzz
| 2025-09-23T23:03:14Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T23:03:13Z |
[View on Civ Archive](https://civarchive.com/models/301151?modelVersionId=338222)
|
seraphimzzzz/267236
|
seraphimzzzz
| 2025-09-23T23:02:18Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T23:02:20Z |
[View on Civ Archive](https://civarchive.com/models/299792?modelVersionId=336691)
|
AnotherMiner/Qwen3-0.6B-Gensyn-Swarm-webbed_scented_heron
|
AnotherMiner
| 2025-09-23T22:53:06Z | 21 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am webbed_scented_heron",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-07-06T00:03:18Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am webbed_scented_heron
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mkulyma/nightingale-models
|
mkulyma
| 2025-09-23T22:39:21Z | 0 | 0 |
pytorch
|
[
"pytorch",
"video-enhancement",
"super-resolution",
"frame-interpolation",
"computer-vision",
"image-to-image",
"license:mit",
"region:us"
] |
image-to-image
| 2025-09-23T21:59:57Z |
---
license: mit
library_name: pytorch
tags:
- video-enhancement
- super-resolution
- frame-interpolation
- computer-vision
pipeline_tag: image-to-image
---
# Nightingale Video Enhancement Models
This repository contains the pre-trained AI models used by the Nightingale Video Enhancement Service.
## Models Included
### Real-ESRGAN (Super Resolution)
- **File**: `RealESRGAN_x4plus.pth`
- **Purpose**: Upscales video resolution (e.g., 480p → 1080p)
- **Architecture**: Real-ESRGAN x4 plus model
- **License**: BSD 3-Clause
- **Source**: [Real-ESRGAN Repository](https://github.com/xinntao/Real-ESRGAN)
### RIFE (Frame Interpolation)
- **File**: `RIFE_v4.6.pth`
- **Purpose**: Interpolates frames to increase frame rate (e.g., 15fps → 30fps)
- **Architecture**: RIFE v4.6 model
- **License**: MIT License
- **Source**: [RIFE Repository](https://github.com/megvii-research/ECCV2022-RIFE)
## Usage
These models are automatically downloaded by the Nightingale Video Enhancement Service. To use them in your own project:
```python
from huggingface_hub import hf_hub_download
# Download Real-ESRGAN model
realesrgan_path = hf_hub_download(
repo_id="mkulyma/nightingale-models",
filename="RealESRGAN_x4plus.pth"
)
# Download RIFE model
rife_path = hf_hub_download(
repo_id="mkulyma/nightingale-models",
filename="RIFE_v4.6.pth"
)
```
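To sanity-check the downloads before wiring them into a pipeline, here is a minimal inspection sketch (assuming the files are standard PyTorch checkpoints, which may wrap their weights in a dict such as `{"params": ...}`):
```python
import torch

# Paths come from the hf_hub_download calls above
for name, path in [("Real-ESRGAN", realesrgan_path), ("RIFE", rife_path)]:
    state = torch.load(path, map_location="cpu")
    # Print the top-level keys so you can see how the weights are wrapped
    top = list(state.keys()) if isinstance(state, dict) else type(state).__name__
    print(name, top[:5] if isinstance(top, list) else top)
```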
## Model Details
- **Total Size**: ~150MB
- **Framework**: PyTorch
- **Device Support**: CUDA GPU (recommended) or CPU
- **Input**: Video files (MP4, AVI, MOV, etc.)
- **Output**: Enhanced video with higher resolution and/or frame rate
## Nightingale Project
These models are part of the [Nightingale Video Enhancement Service](https://github.com/your-username/nightingale), an AI-powered video enhancement platform that provides:
- Web-based interface for video upload and processing
- Docker deployment for easy setup
- GPU acceleration with CPU fallback
- Real-time processing monitoring
- Batch processing capabilities
## License
The models retain their original licenses:
- Real-ESRGAN: BSD 3-Clause License
- RIFE: MIT License
This model repository packaging: MIT License
|
MrOceanMan/ppo-Pyramids_Training
|
MrOceanMan
| 2025-09-23T22:25:14Z | 0 | 0 |
ml-agents
|
[
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] |
reinforcement-learning
| 2025-09-23T22:25:10Z |
---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids**
using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
- A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your
browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction
- A *longer tutorial* to understand how ML-Agents works:
https://huggingface.co/learn/deep-rl-course/unit5/introduction
### Resume the training
```bash
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**
1. If the environment is part of ML-Agents official environments, go to https://huggingface.co/unity
2. Find your model_id: MrOceanMan/ppo-Pyramids_Training
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
|
nbirukov/act_pick_stack_1ring_3cam_act_50
|
nbirukov
| 2025-09-23T22:22:45Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:nbirukov/so101_pick_stack_1_ring_pole_3_cam",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-23T20:25:21Z |
---
datasets: nbirukov/so101_pick_stack_1_ring_pole_3_cam
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- robotics
- act
- lerobot
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
HectorHe/Qwen1.5-MOE-sft-coommonsense15k-aux-free
|
HectorHe
| 2025-09-23T22:21:57Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_moe",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:fw407/Commonsense-15K",
"base_model:Qwen/Qwen1.5-MoE-A2.7B",
"base_model:finetune:Qwen/Qwen1.5-MoE-A2.7B",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T17:51:43Z |
---
base_model: Qwen/Qwen1.5-MoE-A2.7B
datasets: fw407/Commonsense-15K
library_name: transformers
model_name: Qwen1.5-MOE-sft-coommonsense15k-aux-free
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen1.5-MOE-sft-coommonsense15k-aux-free
This model is a fine-tuned version of [Qwen/Qwen1.5-MoE-A2.7B](https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B) on the [fw407/Commonsense-15K](https://huggingface.co/datasets/fw407/Commonsense-15K) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="HectorHe/Qwen1.5-MOE-sft-coommonsense15k-aux-free", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hector_-carnegie-mellon-university/huggingface/runs/4z4lzvnx)
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.51.0
- Pytorch: 2.6.0
- Datasets: 4.1.1
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
WiessdornH/trained_cyberattack_model_3
|
WiessdornH
| 2025-09-23T22:15:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:WiessdornH/distilbert-cyber-domain",
"base_model:finetune:WiessdornH/distilbert-cyber-domain",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-23T22:14:43Z |
---
library_name: transformers
base_model: WiessdornH/distilbert-cyber-domain
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: classifier_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# classifier_model
This model is a fine-tuned version of [WiessdornH/distilbert-cyber-domain](https://huggingface.co/WiessdornH/distilbert-cyber-domain) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1358
- Accuracy: 0.9658
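A minimal inference sketch with the standard 🤗 Transformers pipeline (the label names returned depend on this model's config and are not documented here; the example input is hypothetical):
```python
from transformers import pipeline

# Text-classification pipeline over the fine-tuned DistilBERT checkpoint
classifier = pipeline("text-classification", model="WiessdornH/trained_cyberattack_model_3")
print(classifier("Multiple failed SSH logins followed by a successful root login from a new IP."))
```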
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 42 | 0.2462 | 0.8974 |
| No log | 2.0 | 84 | 0.1497 | 0.9658 |
| No log | 3.0 | 126 | 0.1358 | 0.9658 |
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu126
- Datasets 4.0.0
- Tokenizers 0.22.0
|
mradermacher/Carinae-Qwen3-Radiation-4B-GGUF
|
mradermacher
| 2025-09-23T21:59:51Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"trl",
"Abliterated",
"balanced",
"en",
"base_model:prithivMLmods/Carinae-Qwen3-Radiation-4B",
"base_model:quantized:prithivMLmods/Carinae-Qwen3-Radiation-4B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-23T19:02:23Z |
---
base_model: prithivMLmods/Carinae-Qwen3-Radiation-4B
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- trl
- Abliterated
- balanced
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/prithivMLmods/Carinae-Qwen3-Radiation-4B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Carinae-Qwen3-Radiation-4B-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/Carinae-Qwen3-Radiation-4B-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
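For example, any single-file quant from the table below can be run directly with llama.cpp (a sketch; pick whichever quant suits your hardware):
```bash
llama-cli --hf-repo mradermacher/Carinae-Qwen3-Radiation-4B-GGUF \
  --hf-file Carinae-Qwen3-Radiation-4B.Q4_K_M.gguf \
  -p "Summarize the main ideas of reinforcement learning in three sentences."
```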
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Carinae-Qwen3-Radiation-4B-GGUF/resolve/main/Carinae-Qwen3-Radiation-4B.Q2_K.gguf) | Q2_K | 1.8 | |
| [GGUF](https://huggingface.co/mradermacher/Carinae-Qwen3-Radiation-4B-GGUF/resolve/main/Carinae-Qwen3-Radiation-4B.Q3_K_S.gguf) | Q3_K_S | 2.0 | |
| [GGUF](https://huggingface.co/mradermacher/Carinae-Qwen3-Radiation-4B-GGUF/resolve/main/Carinae-Qwen3-Radiation-4B.Q3_K_M.gguf) | Q3_K_M | 2.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Carinae-Qwen3-Radiation-4B-GGUF/resolve/main/Carinae-Qwen3-Radiation-4B.Q3_K_L.gguf) | Q3_K_L | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/Carinae-Qwen3-Radiation-4B-GGUF/resolve/main/Carinae-Qwen3-Radiation-4B.IQ4_XS.gguf) | IQ4_XS | 2.4 | |
| [GGUF](https://huggingface.co/mradermacher/Carinae-Qwen3-Radiation-4B-GGUF/resolve/main/Carinae-Qwen3-Radiation-4B.Q4_K_S.gguf) | Q4_K_S | 2.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Carinae-Qwen3-Radiation-4B-GGUF/resolve/main/Carinae-Qwen3-Radiation-4B.Q4_K_M.gguf) | Q4_K_M | 2.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Carinae-Qwen3-Radiation-4B-GGUF/resolve/main/Carinae-Qwen3-Radiation-4B.Q5_K_S.gguf) | Q5_K_S | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/Carinae-Qwen3-Radiation-4B-GGUF/resolve/main/Carinae-Qwen3-Radiation-4B.Q5_K_M.gguf) | Q5_K_M | 3.0 | |
| [GGUF](https://huggingface.co/mradermacher/Carinae-Qwen3-Radiation-4B-GGUF/resolve/main/Carinae-Qwen3-Radiation-4B.Q6_K.gguf) | Q6_K | 3.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Carinae-Qwen3-Radiation-4B-GGUF/resolve/main/Carinae-Qwen3-Radiation-4B.Q8_0.gguf) | Q8_0 | 4.4 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Carinae-Qwen3-Radiation-4B-GGUF/resolve/main/Carinae-Qwen3-Radiation-4B.f16.gguf) | f16 | 8.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Vortex5/Crystal-Moon-12B-Q6_K-GGUF
|
Vortex5
| 2025-09-23T21:54:13Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:Vortex5/Crystal-Moon-12B",
"base_model:quantized:Vortex5/Crystal-Moon-12B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T21:53:30Z |
---
base_model: Vortex5/Crystal-Moon-12B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# Vortex5/Crystal-Moon-12B-Q6_K-GGUF
This model was converted to GGUF format from [`Vortex5/Crystal-Moon-12B`](https://huggingface.co/Vortex5/Crystal-Moon-12B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Vortex5/Crystal-Moon-12B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo Vortex5/Crystal-Moon-12B-Q6_K-GGUF --hf-file crystal-moon-12b-q6_k.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo Vortex5/Crystal-Moon-12B-Q6_K-GGUF --hf-file crystal-moon-12b-q6_k.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo Vortex5/Crystal-Moon-12B-Q6_K-GGUF --hf-file crystal-moon-12b-q6_k.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo Vortex5/Crystal-Moon-12B-Q6_K-GGUF --hf-file crystal-moon-12b-q6_k.gguf -c 2048
```
|
thevan2404/whisper-medium-ft-15epochs-gameshow
|
thevan2404
| 2025-09-23T21:53:34Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-medium.en",
"base_model:finetune:openai/whisper-medium.en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-23T14:54:58Z |
---
library_name: transformers
license: apache-2.0
base_model: openai/whisper-medium.en
tags:
- generated_from_trainer
model-index:
- name: whisper-medium-ft-15epochs-gameshow
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-medium-ft-15epochs-gameshow
This model is a fine-tuned version of [openai/whisper-medium.en](https://huggingface.co/openai/whisper-medium.en) on an unknown dataset.
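A minimal transcription sketch using the 🤗 Transformers ASR pipeline (the audio path is a placeholder for your own file):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="thevan2404/whisper-medium-ft-15epochs-gameshow")
# "gameshow_clip.wav" is a placeholder audio file
print(asr("gameshow_clip.wav")["text"])
```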
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 6
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.53.3
- Pytorch 2.7.1+cu118
- Datasets 3.6.0
- Tokenizers 0.21.2
|
choiqs/Qwen3-1.7B-ultrachat-bsz128-regular-seed42-lr2e-6-checkpoint300
|
choiqs
| 2025-09-23T21:49:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T21:49:02Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
NexaAI/phi3.5-mini-npu
|
NexaAI
| 2025-09-23T21:47:46Z | 35 | 0 | null |
[
"region:us"
] | null | 2025-09-12T04:27:05Z |
# Phi-3.5-Mini
Run **Phi-3.5-Mini** optimized for **Qualcomm NPUs** with [nexaSDK](https://sdk.nexa.ai).
## Quickstart
1. **Install nexaSDK** and create a free account at [sdk.nexa.ai](https://sdk.nexa.ai)
2. **Activate your device** with your access token:
```bash
nexa config set license '<access_token>'
```
3. Run the model on Qualcomm NPU in one line:
```bash
nexa infer NexaAI/phi3.5-mini-npu
```
## Model Description
**Phi-3.5-Mini** is a ~3.8B-parameter instruction-tuned language model from Microsoft’s Phi family.
It’s designed to deliver strong reasoning and instruction-following quality within a compact footprint, making it ideal for **on-device** and **latency-sensitive** applications. This Turbo build uses Nexa’s Qualcomm NPU path for faster inference and higher throughput while preserving model quality.
## Features
* **Lightweight yet capable**: strong performance with small memory and compute budgets.
* **Conversational AI**: context-aware dialogue for assistants and agents.
* **Content generation**: drafting, completion, summarization, code comments, and more.
* **Reasoning & analysis**: math/logic step-by-step problem solving.
* **Multilingual**: supports understanding and generation across multiple languages.
* **Customizable**: fine-tune or apply adapters for domain-specific use.
## Use Cases
* Personal and enterprise chatbots
* On-device AI applications and offline assistants
* Document/report/email summarization
* Education and tutoring tools
* Vertical solutions (e.g., healthcare, finance, legal), with proper guardrails
## Inputs and Outputs
**Input**:
* Text prompts or conversation history (tokenized input sequences).
**Output**:
* Generated text: responses, explanations, or creative content.
* Optionally: raw logits/probabilities for advanced downstream tasks.
## License
* Licensed under: [MIT License](https://huggingface.co/microsoft/Phi-3.5-mini-instruct/blob/main/LICENSE)
## References
* [Microsoft – Phi Models](https://www.microsoft.com/en-us/research/project/phi-3)
* [Hugging Face Model Card (Phi-3.5-Mini-Instruct)](https://huggingface.co/microsoft/Phi-3.5-mini-instruct)
* [Phi-3 Technical Report (blog/overview)](https://azure.microsoft.com/en-us/blog/introducing-phi-3)
|
Anneli1/style-bert-vits2-Anneli
|
Anneli1
| 2025-09-23T21:42:12Z | 0 | 1 | null |
[
"license:creativeml-openrail-m",
"region:us"
] | null | 2025-09-08T03:40:30Z |
---
license: creativeml-openrail-m
---
This is a repository for JP-Extra models of [Style-Bert-VITS2](https://github.com/litagin02/Style-Bert-VITS2).
It contains a Style-Bert-VITS2 model named Anneli, downloaded from the internet.
It is distributed under the original distributor's CreativeML Open RAIL-M license.
*Regarding the license: the license in effect at the time of distribution applies.
The license that applied when the model was obtained and redistributed (CreativeML Open RAIL-M) continues to apply to those redistributed copies.
→ In other words, it will not be retroactively prohibited.
In-browser Anneli speech generation is available [here](https://huggingface.co/spaces/Anneli1/Anneli_BrowserTextToSpeech).
https://huggingface.co/spaces/Anneli1/Anneli_BrowserTextToSpeech
| Model name | Sample voice | Styles |
| ---- | ---- | ---- |
| Anneli | <audio controls preload="none" src="https://huggingface.co/Anneli1/style-bert-vits2-Anneli/resolve/main/samples/Anneli.mp3"></audio> | Normal, High energy, Calm, Cheerful, Anger/Sadness |
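If you just need the model files locally (for example, to load them into a Style-Bert-VITS2 installation), a minimal download sketch with `huggingface_hub`:
```python
from huggingface_hub import snapshot_download

# Downloads every file in the repo and returns the local folder path
local_dir = snapshot_download(repo_id="Anneli1/style-bert-vits2-Anneli")
print(local_dir)
```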
|
lindafei001/llama-8b-instruct-medical-dpo-10epochs-1e-5-64-128-fullSuperGodActivated
|
lindafei001
| 2025-09-23T21:40:03Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"dpo",
"trl",
"arxiv:2305.18290",
"base_model:meta-llama/Meta-Llama-3-8B-Instruct",
"base_model:finetune:meta-llama/Meta-Llama-3-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T21:39:47Z |
---
base_model: meta-llama/Meta-Llama-3-8B-Instruct
library_name: transformers
model_name: llama-8b-instruct-medical-dpo-10epochs-1e-5-64-128-fullSuperGodActivated
tags:
- generated_from_trainer
- dpo
- trl
licence: license
---
# Model Card for llama-8b-instruct-medical-dpo-10epochs-1e-5-64-128-fullSuperGodActivated
This model is a fine-tuned version of [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="lindafei001/llama-8b-instruct-medical-dpo-10epochs-1e-5-64-128-fullSuperGodActivated", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite DPO as:
```bibtex
@inproceedings{rafailov2023direct,
title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
year = 2023,
booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
minimimtoy25/cintia
|
minimimtoy25
| 2025-09-23T21:36:45Z | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-09-23T19:07:38Z |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
Jawaker/t5-small-tcp-top-players-early-conclusion-games
|
Jawaker
| 2025-09-23T21:34:10Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T21:34:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
rhecker/smolvla_duplo_policy
|
rhecker
| 2025-09-23T21:31:31Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"smolvla",
"robotics",
"dataset:rhecker/duplo",
"arxiv:2506.01844",
"base_model:lerobot/smolvla_base",
"base_model:finetune:lerobot/smolvla_base",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-23T21:31:09Z |
---
base_model: lerobot/smolvla_base
datasets: rhecker/duplo
library_name: lerobot
license: apache-2.0
model_name: smolvla
pipeline_tag: robotics
tags:
- lerobot
- smolvla
- robotics
---
# Model Card for smolvla
<!-- Provide a quick summary of what the model is/does. -->
[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
mrtoots/unsloth-DeepSeek-V3.1-Terminus-mlx-2Bit
|
mrtoots
| 2025-09-23T21:23:26Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"deepseek_v3",
"text-generation",
"mlx",
"conversational",
"custom_code",
"base_model:unsloth/DeepSeek-V3.1-Terminus",
"base_model:quantized:unsloth/DeepSeek-V3.1-Terminus",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"2-bit",
"region:us"
] |
text-generation
| 2025-09-23T20:58:16Z |
---
license: mit
library_name: transformers
base_model: unsloth/DeepSeek-V3.1-Terminus
tags:
- mlx
---
# mrtoots/unsloth-DeepSeek-V3.1-Terminus-mlx-2Bit
The Model [mrtoots/DeepSeek-V3.1-Terminus-mlx-2Bit](https://huggingface.co/mrtoots/DeepSeek-V3.1-Terminus-mlx-2Bit) was converted to MLX format from [unsloth/DeepSeek-V3.1-Terminus](https://huggingface.co/unsloth/DeepSeek-V3.1-Terminus) using mlx-lm version **0.26.4**.
## Toots' Note:
This model was converted and quantized utilizing unsloth's version of DeepSeek-V3.1-Terminus.
Please follow and support [unsloth's work](https://huggingface.co/unsloth) if you like it!
🦛 <span style="color:#800080">If you want a free consulting session, </span>[fill out this form](https://forms.gle/xM9gw1urhypC4bWS6) <span style="color:#800080">to get in touch!</span> 🤗
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("mrtoots/DeepSeek-V3.1-Terminus-mlx-2Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
oga5/hf-jp-gpt-wiki
|
oga5
| 2025-09-23T21:08:13Z | 0 | 0 |
transformers
|
[
"transformers",
"pytorch",
"custom-gpt",
"text-generation",
"gpt",
"japanese",
"sentencepiece",
"causal-lm",
"custom_code",
"ja",
"dataset:fujiki/wiki40b_ja",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-09-23T20:43:28Z |
---
language:
- ja
license: apache-2.0
library_name: transformers
tags:
- gpt
- japanese
- sentencepiece
- causal-lm
datasets:
- fujiki/wiki40b_ja
pipeline_tag: text-generation
model_name: hf-jp-gpt-wiki
---
# hf-jp-gpt-wiki (Custom Japanese GPT, hyper-small)
This repository contains a hyper-small Japanese GPT model exported in a Hugging Face-compatible layout, with a vendored backbone and SentencePiece tokenizer.
- Architecture: custom GPT (vendored), similar to a tiny GPT-2-like decoder
- Parameters (training config):
- context_length: 256
- emb_dim: 128
- n_layers: 4
- n_heads: 4
- drop_rate: 0.1
- qkv_bias: False
- vocab_size: 32000 (SentencePiece)
- Tokenizer: SentencePiece (`jp_tok_wiki.model`, `jp_tok_wiki.vocab`)
- Load requirement: `trust_remote_code=True`
## Quick Start
```python
from transformers import AutoModelForCausalLM
import torch
import sentencepiece as spm
# Load model (trust_remote_code is required)
model = AutoModelForCausalLM.from_pretrained(
"oga5/hf-jp-gpt-wiki", # or local folder path
trust_remote_code=True
)
model.eval()
# Load SentencePiece tokenizer
sp = spm.SentencePieceProcessor(model_file="jp_tok_wiki.model") # if local
# If running from the Hub, download the files and reference their path, or use hf_hub_download
# from huggingface_hub import hf_hub_download
# tok_path = hf_hub_download("oga5/hf-jp-gpt-wiki", filename="jp_tok_wiki.model")
# sp = spm.SentencePieceProcessor(model_file=tok_path)
eos_id = sp.eos_id()
prompt = "こんにちは。最近あった面白いことは、"
input_ids = sp.encode(prompt, out_type=int)
input_ids = torch.tensor([input_ids], dtype=torch.long)
max_new_tokens = 50
ctx = model.config.context_length
with torch.no_grad():
for _ in range(max_new_tokens):
idx_cond = input_ids[:, -ctx:]
out = model(input_ids=idx_cond)
logits = out["logits"] if isinstance(out, dict) else out.logits
next_id = torch.argmax(logits[:, -1, :], dim=-1, keepdim=True)
if next_id.item() == eos_id:
break
input_ids = torch.cat([input_ids, next_id], dim=1)
print(sp.decode(input_ids[0].tolist()))
```
## Notes
- This model uses a vendored minimal backbone (`modeling_custom_gpt.py`) so it can be loaded from the Hub without external project files.
- The tokenizer is SentencePiece; `AutoTokenizer` is not provided. You can load SentencePiece directly as shown above.
- For sampling with temperature/top-k, you can implement a simple sampler using logits from `model(...)`.
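As a concrete illustration of that last point, here is a minimal temperature/top-k sampling step that reuses `model`, `sp`, and `input_ids` from the Quick Start above (the `temperature` and `top_k` values are illustrative):
```python
import torch

def sample_next_token(logits, temperature=0.8, top_k=50):
    # logits: 1-D tensor over the vocabulary for the last position
    logits = logits / temperature
    if top_k is not None:
        topk_vals, _ = torch.topk(logits, top_k)
        logits[logits < topk_vals[-1]] = float("-inf")
    probs = torch.softmax(logits, dim=-1)
    return torch.multinomial(probs, num_samples=1)

with torch.no_grad():
    out = model(input_ids=input_ids[:, -model.config.context_length:])
    logits = out["logits"] if isinstance(out, dict) else out.logits
    next_id = sample_next_token(logits[0, -1, :].clone())
    input_ids = torch.cat([input_ids, next_id.unsqueeze(0)], dim=1)
```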
## Tokenizer loading (local / Hugging Face Hub)
If you encounter `OSError: Not found: "jp_tok_wiki.model"` when running the sample, make sure you pass an existing file path to SentencePiece. Here are reliable patterns:
- Local folder (e.g., when running `sample/sample.py` under `.../llmtest01/sample/`):
```python
import os
import torch
import sentencepiece as spm
from transformers import AutoModelForCausalLM
# Resolve the repo dir relative to this script file
BASE_DIR = os.path.dirname(os.path.abspath(__file__)) # points to sample/
repo_dir = os.path.normpath(os.path.join(BASE_DIR, "..", "hf_jp_gpt_wiki"))
spm_path = os.path.join(repo_dir, "jp_tok_wiki.model")
print("SPM path:", spm_path, "exists?", os.path.exists(spm_path))
model = AutoModelForCausalLM.from_pretrained(repo_dir, trust_remote_code=True)
model.eval()
sp = spm.SentencePieceProcessor(model_file=spm_path)
```
- From the Hugging Face Hub using `hf_hub_download`:
```python
import torch
import sentencepiece as spm
from transformers import AutoModelForCausalLM
from huggingface_hub import hf_hub_download
repo_id = "oga5/hf-jp-gpt-wiki"
model = AutoModelForCausalLM.from_pretrained(repo_id, trust_remote_code=True)
model.eval()
# Download the tokenizer model and pass the absolute path to SentencePiece
spm_path = hf_hub_download(repo_id=repo_id, filename="jp_tok_wiki.model")
print("Downloaded SPM path:", spm_path)
sp = spm.SentencePieceProcessor(model_file=spm_path)
```
Tip: print the current working directory and directory listings to verify paths:
```python
import os
print("CWD:", os.getcwd())
print("Here:", os.listdir("."))
```
## License
- Model code: Derived from "LLMs from Scratch" examples (Apache 2.0). Source: https://github.com/rasbt/LLMs-from-scratch
- Training dataset: [fujiki/wiki40b_ja](https://huggingface.co/datasets/fujiki/wiki40b_ja). Please comply with the dataset's license/terms and the underlying Wikipedia licensing (e.g., CC BY-SA 3.0 and GFDL). Any redistribution or derivative use of the tokenizer/model should include proper attribution and adhere to those terms.
## Citation
If you use this model, please consider citing the original book/code and this repository.
|
stu2101321082/distilbert-sentiment-MODEL-NAME
|
stu2101321082
| 2025-09-23T21:00:54Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-23T21:00:35Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Mungert/mem-agent-GGUF
|
Mungert
| 2025-09-23T20:59:01Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation",
"arxiv:2507.18071",
"endpoints_compatible",
"region:us",
"conversational"
] |
text-generation
| 2025-09-23T18:20:01Z |
---
pipeline_tag: text-generation
library_name: transformers
---
# <span style="color: #7FFF7F;">mem-agent GGUF Models</span>
## <span style="color: #7F7FFF;">Model Generation Details</span>
This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`1d0125bc`](https://github.com/ggerganov/llama.cpp/commit/1d0125bcf1cbd7195ad0faf826a20bc7cec7d3f4).
---
## <span style="color: #7FFF7F;">Quantization Beyond the IMatrix</span>
I've been experimenting with a new quantization approach that selectively elevates the precision of key layers beyond what the default IMatrix configuration provides.
In my testing, standard IMatrix quantization underperforms at lower bit depths, especially with Mixture of Experts (MoE) models. To address this, I'm using the `--tensor-type` option in `llama.cpp` to manually "bump" important layers to higher precision. You can see the implementation here:
👉 [Layer bumping with llama.cpp](https://github.com/Mungert69/GGUFModelBuilder/blob/main/model-converter/tensor_list_builder.py)
While this does increase model file size, it significantly improves precision for a given quantization level.
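A sketch of what that looks like with `llama-quantize` (the tensor-name patterns and quant types here are illustrative; check `llama-quantize --help` in your build for the exact syntax it supports):
```bash
# Keep most tensors at Q4_K_M but bump attention value/output tensors to Q6_K
./llama-quantize \
  --tensor-type attn_v=q6_k \
  --tensor-type attn_output=q6_k \
  model-f16.gguf model-Q4_K_M-bumped.gguf Q4_K_M
```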
### **I'd love your feedback—have you tried this? How does it perform for you?**
---
<a href="https://readyforquantum.com/huggingface_gguf_selection_guide.html" style="color: #7FFF7F;">
Click here to get info on choosing the right GGUF model format
</a>
---
<!--Begin Original Model Card-->
# mem-agent
Based on [Qwen3-4B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-4B-Thinking-2507), this model was trained using GSPO (Zheng et al., 2025) over an agent scaffold that is built around an Obsidian-like memory system and the tools required to interact with it. The model was trained on the following subtasks:
- Retrieval: Retrieving relevant information when needed from the memory system. In this subtask, we also trained the model on filtering the retrieved information and/or obfuscating it completely.
- Updating: Updating the memory system with new information.
- Clarification: Asking for clarification when the user query is not clear/contradicting with the information in the memory system.
The tools in the scaffold are:
```markdown
# File Operations
create_file(file_path: str, content: str = "") -> bool # Auto-creates parent directories
update_file(file_path: str, old_content: str, new_content: str) -> Union[bool, str] # Returns True or error message
read_file(file_path: str) -> str
delete_file(file_path: str) -> bool
check_if_file_exists(file_path: str) -> bool
# Directory Operations
create_dir(dir_path: str) -> bool
list_files() -> str # Shows tree structure of current working directory
check_if_dir_exists(dir_path: str) -> bool
# Utilities
get_size(file_or_dir_path: str) -> int # Bytes; empty = total memory size
go_to_link(link_string: str) -> bool
```
In the scaffold, the model uses `<think>`, `<python>` and `<reply>` tags to structure its response. Using `<reply>` only when it's done interacting with the memory. The `<python>` block is executed in a sandbox with the tools and the results of the code block are returned in a `<result>` tag to the model, forming the agentic loop.
The model is also trained to be able to handle optional filters given by the user in between <filter> tags after the user query. These filters are used to filter the retrieved information and/or obfuscate it completely.
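A rough sketch of the resulting client-side loop (the tag parsing and the `execute_in_sandbox` helper are assumptions for illustration; the mem-agent-mcp repo linked below contains the actual scaffold):
```python
import re

def run_agent(llm, user_query, max_turns=8):
    # llm(text) -> model completion containing <think>/<python>/<reply> tags
    conversation = user_query
    for _ in range(max_turns):
        completion = llm(conversation)
        reply = re.search(r"<reply>(.*?)</reply>", completion, re.DOTALL)
        if reply:
            return reply.group(1).strip()
        code = re.search(r"<python>(.*?)</python>", completion, re.DOTALL)
        if code:
            result = execute_in_sandbox(code.group(1))  # assumed sandbox exposing the file tools
            conversation += completion + f"\n<result>{result}</result>\n"
    return None
```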
## Benchmark
We evaluated this model and a few other open & closed ones on our benchmark, **md-memory-bench**. We used o3 from OpenAI as the judge. All the other models except driaforall/mem-agent and Qwen/Qwen3-4B-Thinking-2507 were used through OpenRouter.
| Model | Retrieval | Update | Clarification | Filter | Overall |
|-------|-----------|--------|---------------|--------|---------|
| qwen/qwen3-235b-a22b-thinking-2507 | 0.9091 | 0.6363 | 0.4545 | 1 | 0.7857 |
| driaforall/mem-agent | 0.8636 | 0.7272 | 0.3636 | 0.9167 | 0.75 |
| z-ai/glm-4.5 | 0.7727 | 0.8181 | 0.3636 | 0.9167 | 0.7321 |
| deepseek/deepseek-chat-v3.1 | 0.6818 | 0.5454 | 0.5454 | 0.8333 | 0.6607 |
| google/gemini-2.5-pro | 0.7273 | 0.4545 | 0.2727 | 1 | 0.6429 |
| google/gemini-2.5-flash | 0.7727 | 0.3636 | 0.2727 | 0.9167 | 0.625 |
| openai/gpt-5 | 0.6818 | 0.5454 | 0.2727 | 0.9167 | 0.625 |
| anthropic/claude-opus-4.1 | 0.6818 | 0 | 0.8181 | 0.5833 | 0.5536 |
| Qwen/Qwen3-4B-Thinking-2507 | 0.4545 | 0 | 0.2727 | 0.75 | 0.3929 |
| moonshotai/kimi-k2 | 0.3181 | 0.2727 | 0.1818 | 0.6667 | 0.3571 |
Our model, with only 4B parameters, ranks second on the benchmark, beating all the open & closed models except qwen/qwen3-235b-a22b-thinking-2507. It achieves an overall score of 0.75, a significant improvement over the 0.3929 of the base Qwen model.
## Usage
The model, while can be used on its own, is recommended to be used as an MCP server to a bigger model, which can then be used to interact with the memory system. For this, you can check [our repo](https://github.com/firstbatchxyz/mem-agent-mcp/), which contains instructions for both an MCP setup and a cli standalone model usage.
### Memory
The model uses a markdown based memory system with links, inspired by Obsidian. The general structure of the memory is:
```
memory/
├── user.md
└── entities/
└── [entity_name_1].md
└── [entity_name_2].md
└── ...
```
- `user.md` is the main file that contains information about the user and their relationships, accompanied by a link to the entity file in the format `[[entities/[entity_name].md]]` per relationship. The link format should be followed strictly.
- `entities/` is the directory that contains the entity files.
- Each entity file follows the same structure as `user.md`.
- Modifying the memory manually does not require restarting the MCP server.
### Example user.md
```markdown
# User Information
- user_name: John Doe
- birth_date: 1990-01-01
- birth_location: New York, USA
- living_location: Enschede, Netherlands
- zodiac_sign: Aquarius
## User Relationships
- company: [[entities/acme_corp.md]]
- mother: [[entities/jane_doe.md]]
```
### Example entity files (jane_doe.md and acme_corp.md)
```markdown
# Jane Doe
- relationship: Mother
- birth_date: 1965-01-01
- birth_location: New York, USA
```
```markdown
# Acme Corporation
- industry: Software Development
- location: Enschede, Netherlands
```
The model is trained on this memory standard, so for best results the memory system it operates on should follow it. Our MCP server repo includes memory export tools for different sources such as ChatGPT, Notion, etc.
## References:
- [GSPO](https://arxiv.org/pdf/2507.18071), Zheng et al., 2025
<!--End Original Model Card-->
---
# <span id="testllm" style="color: #7F7FFF;">🚀 If you find these models useful</span>
Help me test my **AI-Powered Quantum Network Monitor Assistant** with **quantum-ready security checks**:
👉 [Quantum Network Monitor](https://readyforquantum.com/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)
The full open-source code for the Quantum Network Monitor Service is available at my GitHub repos (repos with NetworkMonitor in the name): [Source Code Quantum Network Monitor](https://github.com/Mungert69). You will also find the code I use to quantize the models in [GGUFModelBuilder](https://github.com/Mungert69/GGUFModelBuilder) if you want to do it yourself.
💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4.1-mini)
- `HugLLM` (Hugging Face open-source models)
- `TestLLM` (Experimental CPU-only)
### **What I’m Testing**
I’m pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
- Automated **Nmap security scans**
- **Quantum-readiness checks**
- **Network Monitoring tasks**
🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads on huggingface docker space):
- ✅ **Zero-configuration setup**
- ⏳ ~30s load time (slow inference but **no API costs**). No token limit, as the cost is low.
- 🔧 **Help wanted!** If you’re into **edge-device AI**, let’s collaborate!
### **Other Assistants**
🟢 **TurboLLM** – Uses **gpt-4.1-mini**:
- It performs very well, but unfortunately OpenAI charges per token, so token usage is limited.
- **Create custom cmd processors to run .net code on Quantum Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security Audits**
- **Penetration testing** (Nmap/Metasploit)
🔵 **HugLLM** – Latest Open-source models:
- 🌐 Runs on the Hugging Face Inference API. Performs pretty well using the latest models hosted on Novita.
### 💡 **Example commands you could test**:
1. `"Give me info on my websites SSL certificate"`
2. `"Check if my server is using quantum safe encyption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. '"Create a cmd processor to .. (what ever you want)" Note you need to install a [Quantum Network Monitor Agent](https://readyforquantum.com/Download/?utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme) to run the .net code on. This is a very flexible and powerful feature. Use with caution!
### Final Word
I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI—all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.
If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.
I'm also open to job opportunities or sponsorship.
Thank you! 😊
|
mradermacher/PIPer-8B-RL-only-GGUF
|
mradermacher
| 2025-09-23T20:46:55Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"dataset:PIPer-iclr/envbench-zeroshot-rl",
"base_model:PIPer-iclr/PIPer-8B-RL-only",
"base_model:quantized:PIPer-iclr/PIPer-8B-RL-only",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-23T19:58:15Z |
---
base_model: PIPer-iclr/PIPer-8B-RL-only
datasets:
- PIPer-iclr/envbench-zeroshot-rl
language:
- en
library_name: transformers
license: mit
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/PIPer-iclr/PIPer-8B-RL-only
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#PIPer-8B-RL-only-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
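As a minimal sketch (assuming `llama-cpp-python` and `huggingface_hub` are installed; the file name is taken from the quant table below), a single quant can be fetched and run locally like this:
```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repo (Q4_K_M is the "fast, recommended" entry below).
gguf_path = hf_hub_download(
    repo_id="mradermacher/PIPer-8B-RL-only-GGUF",
    filename="PIPer-8B-RL-only.Q4_K_M.gguf",
)

llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write a setup script for a fresh Python repository.", max_tokens=128)
print(out["choices"][0]["text"])
```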
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/PIPer-8B-RL-only-GGUF/resolve/main/PIPer-8B-RL-only.Q2_K.gguf) | Q2_K | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/PIPer-8B-RL-only-GGUF/resolve/main/PIPer-8B-RL-only.Q3_K_S.gguf) | Q3_K_S | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/PIPer-8B-RL-only-GGUF/resolve/main/PIPer-8B-RL-only.Q3_K_M.gguf) | Q3_K_M | 4.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/PIPer-8B-RL-only-GGUF/resolve/main/PIPer-8B-RL-only.Q3_K_L.gguf) | Q3_K_L | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/PIPer-8B-RL-only-GGUF/resolve/main/PIPer-8B-RL-only.IQ4_XS.gguf) | IQ4_XS | 4.7 | |
| [GGUF](https://huggingface.co/mradermacher/PIPer-8B-RL-only-GGUF/resolve/main/PIPer-8B-RL-only.Q4_K_S.gguf) | Q4_K_S | 4.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PIPer-8B-RL-only-GGUF/resolve/main/PIPer-8B-RL-only.Q4_K_M.gguf) | Q4_K_M | 5.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/PIPer-8B-RL-only-GGUF/resolve/main/PIPer-8B-RL-only.Q5_K_S.gguf) | Q5_K_S | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/PIPer-8B-RL-only-GGUF/resolve/main/PIPer-8B-RL-only.Q5_K_M.gguf) | Q5_K_M | 6.0 | |
| [GGUF](https://huggingface.co/mradermacher/PIPer-8B-RL-only-GGUF/resolve/main/PIPer-8B-RL-only.Q6_K.gguf) | Q6_K | 6.8 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/PIPer-8B-RL-only-GGUF/resolve/main/PIPer-8B-RL-only.Q8_0.gguf) | Q8_0 | 8.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/PIPer-8B-RL-only-GGUF/resolve/main/PIPer-8B-RL-only.f16.gguf) | f16 | 16.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
aivibes9/flux-lora-fast-style
|
aivibes9
| 2025-09-23T20:44:55Z | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"text-to-image",
"lora",
"fal",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-23T20:44:49Z |
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: vibstyle
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# flux lora fast style
<Gallery />
## Model description
A LoRA model trained with fal.ai using Flux for fast style adaptation.
## Trigger words
You should use `vibstyle` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/aivibes9/flux-lora-fast-style/tree/main) them in the Files & versions tab.
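A minimal generation sketch with `diffusers`, assuming the standard FLUX LoRA-loading flow (adjust dtype and device to your hardware); note the `vibstyle` trigger word in the prompt:
```python
import torch
from diffusers import FluxPipeline

# Load the base FLUX.1-dev model and apply this LoRA on top of it.
pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("aivibes9/flux-lora-fast-style")
pipe.to("cuda")

image = pipe(
    "a cozy mountain cabin at dusk, vibstyle",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("vibstyle_cabin.png")
```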
## Training at fal.ai
Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
|
aamijar/ReplaceME-Llama-2-5B-lora-r8-sst2
|
aamijar
| 2025-09-23T20:44:06Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T20:44:04Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aamijar/ReplaceME-Llama-2-5B-lora-r8-sst2-epochs4
|
aamijar
| 2025-09-23T20:44:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T20:44:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
kevin510/friday
|
kevin510
| 2025-09-23T20:42:19Z | 5 | 0 |
transformers
|
[
"transformers",
"safetensors",
"friday",
"text-generation",
"vision-language",
"multimodal",
"custom_code",
"bf16",
"conversational",
"dataset:liuhaotian/LLaVA-Instruct-150K",
"dataset:liuhaotian/LLaVA-Pretrain",
"base_model:kevin510/fast-vit-hd",
"base_model:finetune:kevin510/fast-vit-hd",
"license:apache-2.0",
"autotrain_compatible",
"region:us"
] |
text-generation
| 2025-04-28T22:23:09Z |
---
license: apache-2.0
datasets:
- liuhaotian/LLaVA-Instruct-150K
- liuhaotian/LLaVA-Pretrain
base_model:
- microsoft/Phi-4-mini-reasoning
- kevin510/fast-vit-hd
library_name: transformers
tags:
- vision-language
- multimodal
- friday
- custom_code
- bf16
---
# Friday-VLM
Friday-VLM is a multimodal (image + text) LLM fine-tuned on image and text instruction data.
The architecture and config live in this repo, so callers must load the model with
`trust_remote_code=True`.
---
# Model variants
| Repo ID | Precision | File format | Typical VRAM* | Size on disk |
|---------|-----------|-------------|---------------|--------------|
| `kevin510/friday` | **bf16** (full) | `safetensors` | 100 % | 100 % |
| `kevin510/friday-fp4` | **fp4** (bitsandbytes int4) | `safetensors` | ≈ 30 % | ≈ 25 % |
---
# Dependencies
```bash
conda create --name friday python=3.12 -y
conda activate friday
pip install transformers torch torchvision deepspeed accelerate pillow einops timm
```
# Quick start
```python
import torch
from PIL import Image
from transformers import AutoTokenizer, AutoModelForCausalLM
from transformers.utils import logging
tok = AutoTokenizer.from_pretrained("kevin510/friday", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
"kevin510/friday",
trust_remote_code=True,
device_map="auto"
)
model.eval()
prompt = "Describe this image."
user_prompt = f"<|user|><image>\n{prompt}\n<|assistant|>"
inputs = tok(user_prompt, return_tensors="pt").to(model.device)
image = Image.open("my_image.jpg").convert("RGB")
with torch.no_grad():
out = model.generate(
**inputs,
max_new_tokens=256,
do_sample=False,
images=[image]
)
print(tok.decode(out[0], skip_special_tokens=False))
```
# Architecture at a glance
```
FastViT-HD ─▶ 3072-d patch embeddings ─▶ S2 6144-d patch embeddings ─▶ 2-layer MLP vision-adapter (6144 → 3072)
(vision tokens, 3072 d) ─┐
├─► Φ-4-mini-reasoning (2.7 B params, hidden = 3072)
<text tokens, 3072 d> ───┘ │
│ (standard self-attention only;
│ language tower is frozen at finetune)
```
# Limitations & Responsible AI
Friday-VLM may hallucinate objects, invent facts, or reproduce societal biases.
All variants share the same behaviour profile; quantisation does not filter or sanitise model outputs. Users must apply their own content-safety layer before deployment.
# Citation
```bibtex
@misc{friday2025,
title = {Friday VLM: Efficient Instruction-Tuned Vision–Language Modelling},
author = {Kevin Rohling},
year = {2025},
url = {https://huggingface.co/kevin510/friday}
}
```
|
ICTuniverse/ViMRC-large-qa-one-shot-combined
|
ICTuniverse
| 2025-09-23T20:36:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"question-answering",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2025-09-23T12:46:15Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Ebimsv/gemma-3-270m-it-finetuned
|
Ebimsv
| 2025-09-23T20:32:42Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma3_text",
"text-generation",
"unsloth",
"trl",
"sft",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T20:14:40Z |
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Lakshan2003/SmolLM3-3B-instruct-customerservice
|
Lakshan2003
| 2025-09-23T20:31:32Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:HuggingFaceTB/SmolLM3-3B",
"lora",
"sft",
"transformers",
"trl",
"unsloth",
"text-generation",
"conversational",
"en",
"dataset:Lakshan2003/customer-support-client-agent-conversations",
"arxiv:1910.09700",
"base_model:HuggingFaceTB/SmolLM3-3B",
"license:mit",
"region:us"
] |
text-generation
| 2025-09-23T20:28:49Z |
---
base_model: HuggingFaceTB/SmolLM3-3B
library_name: peft
pipeline_tag: text-generation
tags:
- base_model:adapter:HuggingFaceTB/SmolLM3-3B
- lora
- sft
- transformers
- trl
- unsloth
license: mit
datasets:
- Lakshan2003/customer-support-client-agent-conversations
language:
- en
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
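A minimal sketch, assuming the LoRA adapter in this repo applies to the base model named in the metadata (`HuggingFaceTB/SmolLM3-3B`):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model, then attach the adapter from this repository.
base = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM3-3B", device_map="auto")
model = PeftModel.from_pretrained(base, "Lakshan2003/SmolLM3-3B-instruct-customerservice")
tok = AutoTokenizer.from_pretrained("HuggingFaceTB/SmolLM3-3B")

messages = [{"role": "user", "content": "I was charged twice for my last order. Can you help?"}]
inputs = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(base.device)
output = model.generate(inputs, max_new_tokens=128)
print(tok.decode(output[0], skip_special_tokens=True))
```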
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.17.1
|
mradermacher/Deneb-Qwen3-Radiation-0.6B-i1-GGUF
|
mradermacher
| 2025-09-23T20:30:10Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"multilingual",
"polished",
"Abliterated",
"math",
"en",
"zh",
"base_model:prithivMLmods/Deneb-Qwen3-Radiation-0.6B",
"base_model:quantized:prithivMLmods/Deneb-Qwen3-Radiation-0.6B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | 2025-09-23T20:01:18Z |
---
base_model: prithivMLmods/Deneb-Qwen3-Radiation-0.6B
language:
- en
- zh
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- multilingual
- polished
- Abliterated
- math
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
<!-- ### quants: Q2_K IQ3_M Q4_K_S IQ3_XXS Q3_K_M small-IQ4_NL Q4_K_M IQ2_M Q6_K IQ4_XS Q2_K_S IQ1_M Q3_K_S IQ2_XXS Q3_K_L IQ2_XS Q5_K_S IQ2_S IQ1_S Q5_K_M Q4_0 IQ3_XS Q4_1 IQ3_S -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
weighted/imatrix quants of https://huggingface.co/prithivMLmods/Deneb-Qwen3-Radiation-0.6B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Deneb-Qwen3-Radiation-0.6B-i1-GGUF).***
static quants are available at https://huggingface.co/mradermacher/Deneb-Qwen3-Radiation-0.6B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Deneb-Qwen3-Radiation-0.6B-i1-GGUF/resolve/main/Deneb-Qwen3-Radiation-0.6B.imatrix.gguf) | imatrix | 0.1 | imatrix file (for creating your own quants) |
| [GGUF](https://huggingface.co/mradermacher/Deneb-Qwen3-Radiation-0.6B-i1-GGUF/resolve/main/Deneb-Qwen3-Radiation-0.6B.i1-IQ1_S.gguf) | i1-IQ1_S | 0.3 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Deneb-Qwen3-Radiation-0.6B-i1-GGUF/resolve/main/Deneb-Qwen3-Radiation-0.6B.i1-IQ1_M.gguf) | i1-IQ1_M | 0.3 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Deneb-Qwen3-Radiation-0.6B-i1-GGUF/resolve/main/Deneb-Qwen3-Radiation-0.6B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Deneb-Qwen3-Radiation-0.6B-i1-GGUF/resolve/main/Deneb-Qwen3-Radiation-0.6B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 0.3 | |
| [GGUF](https://huggingface.co/mradermacher/Deneb-Qwen3-Radiation-0.6B-i1-GGUF/resolve/main/Deneb-Qwen3-Radiation-0.6B.i1-IQ2_S.gguf) | i1-IQ2_S | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Deneb-Qwen3-Radiation-0.6B-i1-GGUF/resolve/main/Deneb-Qwen3-Radiation-0.6B.i1-IQ2_M.gguf) | i1-IQ2_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Deneb-Qwen3-Radiation-0.6B-i1-GGUF/resolve/main/Deneb-Qwen3-Radiation-0.6B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 0.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Deneb-Qwen3-Radiation-0.6B-i1-GGUF/resolve/main/Deneb-Qwen3-Radiation-0.6B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 0.4 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Deneb-Qwen3-Radiation-0.6B-i1-GGUF/resolve/main/Deneb-Qwen3-Radiation-0.6B.i1-Q2_K.gguf) | i1-Q2_K | 0.4 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Deneb-Qwen3-Radiation-0.6B-i1-GGUF/resolve/main/Deneb-Qwen3-Radiation-0.6B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Deneb-Qwen3-Radiation-0.6B-i1-GGUF/resolve/main/Deneb-Qwen3-Radiation-0.6B.i1-IQ3_S.gguf) | i1-IQ3_S | 0.4 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Deneb-Qwen3-Radiation-0.6B-i1-GGUF/resolve/main/Deneb-Qwen3-Radiation-0.6B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 0.4 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Deneb-Qwen3-Radiation-0.6B-i1-GGUF/resolve/main/Deneb-Qwen3-Radiation-0.6B.i1-IQ3_M.gguf) | i1-IQ3_M | 0.4 | |
| [GGUF](https://huggingface.co/mradermacher/Deneb-Qwen3-Radiation-0.6B-i1-GGUF/resolve/main/Deneb-Qwen3-Radiation-0.6B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 0.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Deneb-Qwen3-Radiation-0.6B-i1-GGUF/resolve/main/Deneb-Qwen3-Radiation-0.6B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Deneb-Qwen3-Radiation-0.6B-i1-GGUF/resolve/main/Deneb-Qwen3-Radiation-0.6B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 0.5 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Deneb-Qwen3-Radiation-0.6B-i1-GGUF/resolve/main/Deneb-Qwen3-Radiation-0.6B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 0.5 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/Deneb-Qwen3-Radiation-0.6B-i1-GGUF/resolve/main/Deneb-Qwen3-Radiation-0.6B.i1-Q4_0.gguf) | i1-Q4_0 | 0.5 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Deneb-Qwen3-Radiation-0.6B-i1-GGUF/resolve/main/Deneb-Qwen3-Radiation-0.6B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 0.5 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Deneb-Qwen3-Radiation-0.6B-i1-GGUF/resolve/main/Deneb-Qwen3-Radiation-0.6B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 0.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Deneb-Qwen3-Radiation-0.6B-i1-GGUF/resolve/main/Deneb-Qwen3-Radiation-0.6B.i1-Q4_1.gguf) | i1-Q4_1 | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Deneb-Qwen3-Radiation-0.6B-i1-GGUF/resolve/main/Deneb-Qwen3-Radiation-0.6B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Deneb-Qwen3-Radiation-0.6B-i1-GGUF/resolve/main/Deneb-Qwen3-Radiation-0.6B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 0.5 | |
| [GGUF](https://huggingface.co/mradermacher/Deneb-Qwen3-Radiation-0.6B-i1-GGUF/resolve/main/Deneb-Qwen3-Radiation-0.6B.i1-Q6_K.gguf) | i1-Q6_K | 0.6 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
george2cool36/hw2_text_finetune_distilbert
|
george2cool36
| 2025-09-23T20:29:19Z | 7 | 0 |
transformers
|
[
"transformers",
"safetensors",
"distilbert",
"text-classification",
"finetuned",
"homework",
"dataset:ddecosmo/hw_text_dataset",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-22T15:00:37Z |
---
license: mit
tags:
- text-classification
- distilbert
- finetuned
- homework
library_name: transformers
datasets:
- ddecosmo/hw_text_dataset
---
# HW 2 - DistilBERT fine-tuned
## Task
Fine-tuned **DistilBERT** for text classification on a classmate's HW1 dataset.
- Dataset: `ddecosmo/hw_text_dataset`
- Text column: `Text`
- Label column: `label` (classes: ['asu', 'bucknell', 'cmu', 'duq', 'ucsd'])
- Train/Eval split: 80/20 (stratified if available)
## Training
- Base model: `distilbert-base-uncased`
- Epochs: 3, LR=5e-5, WD=0.01, warmup=10%
- Batch size: 16
- Best model by: F1 (macro)
## Results (Test)
- Accuracy: 0.4000
- F1 (macro): 0.1231
- Precision (macro): nan
- Recall (macro): nan
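For inference, a minimal sketch with the `transformers` pipeline (label names depend on the saved `id2label` mapping; the class indices are listed in the Notes below):
```python
from transformers import pipeline

# Load the fine-tuned classifier straight from the Hub.
clf = pipeline("text-classification", model="george2cool36/hw2_text_finetune_distilbert")
print(clf("The engineering quad was packed before the spring carnival kickoff."))
# e.g. [{'label': ..., 'score': ...}]
```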
## Notes & Limitations
- Small student dataset; results may vary with seeds.
- Labels mapped as: {'asu': 0, 'bucknell': 1, 'cmu': 2, 'duq': 3, 'ucsd': 4}
## AI Tool Disclosure
This notebook used ChatGPT for scaffolding code and documentation.
All dataset selection, training, evaluation, and uploads were performed by the student.
|
xiaoyu2691/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-restless_restless_cobra
|
xiaoyu2691
| 2025-09-23T20:28:20Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am restless_restless_cobra",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T14:10:05Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am restless_restless_cobra
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
chomeed/test
|
chomeed
| 2025-09-23T20:24:27Z | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"diffusion",
"robotics",
"dataset:chomeed/anubis_stack_tuna",
"arxiv:2303.04137",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-23T20:04:26Z |
---
datasets: chomeed/anubis_stack_tuna
library_name: lerobot
license: apache-2.0
model_name: diffusion
pipeline_tag: robotics
tags:
- diffusion
- lerobot
- robotics
---
# Model Card for diffusion
<!-- Provide a quick summary of what the model is/does. -->
[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
python -m lerobot.scripts.train \
--dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=diffusion \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
python -m lerobot.record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
patrickamadeus/nanoVLM-230M-8k-vanilla-coco-caption-instruct
|
patrickamadeus
| 2025-09-23T20:20:40Z | 0 | 0 |
nanovlm
|
[
"nanovlm",
"safetensors",
"vision-language",
"multimodal",
"research",
"image-text-to-text",
"license:mit",
"region:us"
] |
image-text-to-text
| 2025-09-23T20:20:01Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
library_name: nanovlm
license: mit
pipeline_tag: image-text-to-text
tags:
- vision-language
- multimodal
- research
---
**nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built using pure PyTorch, the entire model architecture and training logic fits within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M parameter model.
For more information, check out the base model on https://huggingface.co/lusxvr/nanoVLM-222M.
**Usage:**
Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM.
Follow the install instructions and run the following code:
```python
from models.vision_language_model import VisionLanguageModel
model = VisionLanguageModel.from_pretrained("patrickamadeus/nanoVLM-230M-8k-vanilla-coco-caption-instruct")
```
|
made-with-clay/Clay
|
made-with-clay
| 2025-09-23T20:14:44Z | 0 | 38 |
transformers
|
[
"transformers",
"doi:10.57967/hf/1537",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2023-12-29T15:20:25Z |
---
license: apache-2.0
license_link: https://raw.githubusercontent.com/Clay-foundation/model/main/LICENSE.md
---
# Clay Foundation Model
## An open source AI model for Earth
Clay is a foundational model of Earth. It uses an expanded visual transformer upgraded to understand geospatial and temporal relations on Earth data. The model is trained as a self-supervised Masked Autoencoder (MAE).
The Clay model can be used in three main ways:
- Generate semantic embeddings for any location and time.
- Fine-tune the model for downstream tasks such as classification, regression, and generative tasks.
- Use the model as a backbone for other models.
## Where is what
- Our **website** is [madewithclay.org](https://madewithclay.org).
- The Clay model **code** lives on [Github](https://github.com/Clay-foundation/model). License: [Apache](https://github.com/Clay-foundation/model/LICENSE). The latest release is [v0.0.1](https://github.com/Clay-foundation/model/releases/tag/v0.0.1)
- The Clay model **weights** on [Hugging Face](https://huggingface.co/made-with-clay/Clay/). License: [Apache](https://github.com/Clay-foundation/model/LICENSE).
- Legacy versions on https://huggingface.co/made-with-clay/Clay-legacy
- The Clay **documentation** [lives on this site](https://clay-foundation.github.io/model/). License: [CC-BY](http://creativecommons.org/licenses/by/4.0/).
- We maintain a set of **embeddings** on [Source Cooperative](https://beta.source.coop/clay/). License: [ODC-BY](https://opendatacommons.org/licenses/by/).
Clay v0 to v1.5 were fiscally sponsored projects of the 501(c)(3) non-profit [Radiant Earth Foundation](https://www.radiant.earth/).
|
RedHatAI/Apertus-70B-Instruct-2509-quantized.w4a16
|
RedHatAI
| 2025-09-23T20:12:52Z | 8 | 1 |
transformers
|
[
"transformers",
"safetensors",
"apertus",
"text-generation",
"multilingual",
"compliant",
"swiss-ai",
"fp8",
"vllm",
"compressed-tensors",
"llm-compressor",
"conversational",
"base_model:swiss-ai/Apertus-70B-Instruct-2509",
"base_model:quantized:swiss-ai/Apertus-70B-Instruct-2509",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-21T22:30:39Z |
---
license: apache-2.0
pipeline_tag: text-generation
library_name: transformers
tags:
- multilingual
- compliant
- swiss-ai
- apertus
- fp8
- vllm
- compressed-tensors
- llm-compressor
base_model:
- swiss-ai/Apertus-70B-Instruct-2509
---
## Model Overview
- **Model Architecture:** ApertusForCausalLM
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** INT4
- **Release Date:** 9/22/2025
- **Version:** 1.0
- **Model Developers:** Red Hat
Quantized version of [swiss-ai/Apertus-70B-Instruct-2509](https://huggingface.co/swiss-ai/Apertus-70B-Instruct-2509).
### Model Optimizations
This model was obtained by quantizing the weights of [swiss-ai/Apertus-70B-Instruct-2509](https://huggingface.co/swiss-ai/Apertus-70B-Instruct-2509) to the INT4 data type.
This optimization reduces the number of bits per weight from 16 to 4, reducing the disk size and GPU memory requirements by approximately 75%.
Only the weights of the linear operators within transformers blocks are quantized; activations remain in 16-bit precision.
## Deployment
### Use with vLLM
1. Initialize vLLM server:
```
vllm serve RedHatAI/Apertus-70B-Instruct-2509-quantized.w4a16
```
2. Send requests to the server:
```python
from openai import OpenAI
# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://<your-server-host>:8000/v1"
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
model = "RedHatAI/Apertus-70B-Instruct-2509-quantized.w4a16"
messages = [
{"role": "user", "content": "Give me a short introduction to large language model."},
]
outputs = client.chat.completions.create(
model=model,
messages=messages,
)
generated_text = outputs.choices[0].message.content
print(generated_text)
```
## Creation
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.
<details>
<summary>Model Creation Code</summary>
```python
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load model
model_stub = "swiss-ai/Apertus-70B-Instruct-2509"
model_name = model_stub.split("/")[-1]
model = AutoModelForCausalLM.from_pretrained(model_stub, dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_stub)
# Configure the quantization algorithm and scheme
# (W4A16: INT4 weights with 16-bit activations, matching the quantized.w4a16 naming)
recipe = QuantizationModifier(
ignore=["lm_head"],
targets="Linear",
scheme="W4A16",
)
# Apply quantization
oneshot(
model=model,
recipe=recipe,
)
# Save to disk in compressed-tensors format
save_path = model_name + "-quantized.w4a16"
model.save_pretrained(save_path)
tokenizer.save_pretrained(save_path)
print(f"Model and tokenizer saved to: {save_path}")
```
</details>
## Evaluation
The model was evaluated on OpenLLM Leaderboard [V1](https://huggingface.co/spaces/open-llm-leaderboard-old/open_llm_leaderboard), using the following command:
<details>
<summary>Evaluation Commands</summary>
OpenLLM Leaderboard V1:
```
lm_eval \
--model vllm \
--model_args pretrained="RedHatAI/Apertus-70B-Instruct-2509-quantized.w4a16",dtype=auto,add_bos_token=True,max_model_len=4096,tensor_parallel_size=2,gpu_memory_utilization=0.2,enable_chunked_prefill=True \
--tasks openllm \
--write_out \
--batch_size auto \
--output_path output_dir \
--show_config
```
</details>
### Accuracy
<table>
<thead>
<tr>
<th>Category</th>
<th>Metric</th>
<th>swiss-ai/Apertus-70B-Instruct-2509</th>
<th>RedHatAI/Apertus-70B-Instruct-2509-quantized.w4a16</th>
<th>Recovery (%)</th>
</tr>
</thead>
<tbody>
<!-- OpenLLM Leaderboard V1 -->
<tr>
<td rowspan="7"><b>OpenLLM V1</b></td>
<td>ARC-Challenge (Acc-Norm, 25-shot)</td>
<td>70.82</td>
<td>70.65</td>
<td>99.8</td>
</tr>
<tr>
<td>GSM8K (Strict-Match, 5-shot)</td>
<td>73.69</td>
<td>73.45</td>
<td>99.7</td>
</tr>
<tr>
<td>HellaSwag (Acc-Norm, 10-shot)</td>
<td>86.23</td>
<td>85.67</td>
<td>99.4</td>
</tr>
<tr>
<td>MMLU (Acc, 5-shot)</td>
<td>69.21</td>
<td>68.25</td>
<td>98.6</td>
</tr>
<tr>
<td>TruthfulQA (MC2, 0-shot)</td>
<td>60.31</td>
<td>60.55</td>
<td>100.4</td>
</tr>
<tr>
<td>Winogrande (Acc, 5-shot)</td>
<td>80.74</td>
<td>80.03</td>
<td>99.1</td>
</tr>
<tr>
<td><b>Average Score</b></td>
<td><b>73.50</b></td>
<td><b>73.10</b></td>
<td><b>99.5</b></td>
</tr>
</tbody>
</table>
|
RedHatAI/Qwen2.5-Coder-14B-Instruct-FP8-dynamic
|
RedHatAI
| 2025-09-23T20:11:52Z | 8 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"code",
"codeqwen",
"chat",
"qwen",
"qwen-coder",
"fp8",
"llm-compressor",
"compressed-tensors",
"vllm",
"conversational",
"en",
"base_model:Qwen/Qwen2.5-Coder-14B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-Coder-14B-Instruct",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2024-11-28T05:20:40Z |
---
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct/blob/main/LICENSE
language:
- en
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- codeqwen
- chat
- qwen
- qwen-coder
- fp8
- llm-compressor
- compressed-tensors
- vllm
base_model:
- Qwen/Qwen2.5-Coder-14B-Instruct
---
## Model Overview
- **Model Architecture:** Qwen2ForCausalLM
- **Input:** Text
- **Output:** Text
- **Model Optimizations:**
- **Weight quantization:** FP8
- **Activation quantization:** FP8
- **Release Date:** 11/28/2024
- **Version:** 1.0
- **Model Developers:** Red Hat
Quantized version of [Qwen/Qwen2.5-Coder-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct).
### Model Optimizations
This model was obtained by quantizing the weights and activations of [Qwen/Qwen2.5-Coder-14B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-14B-Instruct) to FP8 data type.
This optimization reduces the number of bits per parameter from 16 to 8, reducing the disk size and GPU memory requirements by approximately 50%.
Only the weights and activations of the linear operators within transformers blocks are quantized.
## Deployment
### Use with vLLM
1. Initialize vLLM server:
```
vllm serve RedHatAI/Qwen2.5-Coder-14B-Instruct-FP8-dynamic
```
2. Send requests to the server:
```python
from openai import OpenAI
# Modify OpenAI's API key and API base to use vLLM's API server.
openai_api_key = "EMPTY"
openai_api_base = "http://<your-server-host>:8000/v1"
client = OpenAI(
api_key=openai_api_key,
base_url=openai_api_base,
)
model = "RedHatAI/Qwen2.5-Coder-14B-Instruct-FP8-dynamic"
messages = [
{"role": "user", "content": "Write a quick sort algorithm."},
]
outputs = client.chat.completions.create(
model=model,
messages=messages,
)
generated_text = outputs.choices[0].message.content
print(generated_text)
```
## Creation
This model was created with [llm-compressor](https://github.com/vllm-project/llm-compressor) by running the code snippet below.
<details>
<summary>Model Creation Code</summary>
```python
from llmcompressor.modifiers.quantization import QuantizationModifier
from llmcompressor.transformers import oneshot
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load model
model_stub = "Qwen/Qwen2.5-Coder-14B-Instruct"
model_name = model_stub.split("/")[-1]
model = AutoModelForCausalLM.from_pretrained(model_stub, dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(model_stub)
# Configure the quantization algorithm and scheme
recipe = QuantizationModifier(
ignore=["lm_head"],
targets="Linear",
scheme="FP8_dynamic",
)
# Apply quantization
oneshot(
model=model,
recipe=recipe,
)
# Save to disk in compressed-tensors format
save_path = model_name + "-FP8-dynamic"
model.save_pretrained(save_path)
tokenizer.save_pretrained(save_path)
print(f"Model and tokenizer saved to: {save_path}")
```
</details>
|
ke7innn/gpt-neo-sarcasm-base-classification
|
ke7innn
| 2025-09-23T20:08:22Z | 0 | 1 | null |
[
"safetensors",
"gpt_neo",
"sarcasm",
"classification",
"gpt-neo",
"en",
"license:mit",
"4-bit",
"bitsandbytes",
"region:us"
] | null | 2025-09-23T20:02:17Z |
---
language: en
tags:
- sarcasm
- classification
- gpt-neo
license: mit
---
# GPT-Neo Sarcasm Classifier
This is a fine-tuned version of **EleutherAI/gpt-neo-1.3B** for detecting sarcasm in text.
It classifies input text as:
- `0` → Not Sarcastic
- `1` → Sarcastic
## Intended Use
This model can be used to detect sarcastic comments or headlines for applications such as:
- Social media analysis
- Sentiment enhancement pipelines
- Chatbots that need sarcasm awareness
It works best on text similar to the dataset it was fine-tuned on (`sarcasm.csv`).
## How to Use
You can load and use the model in Python as follows:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
base_model = "EleutherAI/gpt-neo-1.3B"
adapter_repo = "ke7innn/gpt-neo-sarcasm-base-classification"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_repo)
```
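The snippet above only loads the adapter. Continuing from it, a minimal inference sketch is shown below; it assumes the fine-tune emits the label (`0` or `1`) as generated text after the input, which is an assumption about the training format rather than something stated in this card.
```python
# Hedged sketch (continuing from the loading code above): classify a sentence by
# letting the fine-tuned causal LM generate the label after the input text.
# The prompt/label format is an assumption; match it to how the adapter was trained.
text = "Oh great, another Monday. Just what I needed."
inputs = tokenizer(text, return_tensors="pt").to(model.device)

output_ids = model.generate(**inputs, max_new_tokens=2, do_sample=False)
new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
completion = tokenizer.decode(new_tokens, skip_special_tokens=True)

label = 1 if "1" in completion else 0
print("Sarcastic" if label == 1 else "Not Sarcastic")
```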
|
eilamc14/t5-large-text-simplification
|
eilamc14
| 2025-09-23T20:04:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"t5",
"text2text-generation",
"text-simplification",
"WikiLarge",
"en",
"dataset:eilamc14/wikilarge-clean",
"base_model:google-t5/t5-base",
"base_model:finetune:google-t5/t5-base",
"license:apache-2.0",
"model-index",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T15:09:57Z |
---
library_name: transformers
license: apache-2.0
base_model: t5-base
datasets:
- eilamc14/wikilarge-clean
language:
- en
tags:
- text-simplification
- WikiLarge
model-index:
- name: t5-large-text-simplification
results:
- task:
type: text2text-generation
name: Text Simplification
dataset:
name: ASSET
type: facebook/asset
url: https://huggingface.co/datasets/facebook/asset
split: test
metrics:
- type: SARI
value: 35.41
- type: FKGL
value: 8.54
- type: BERTScore
value: 87.04
- type: LENS
value: 59.81
- type: Identical ratio
value: 0.22
- type: Identical ratio (ci)
value: 0.22
- task:
type: text2text-generation
name: Text Simplification
dataset:
name: MEDEASI
type: cbasu/Med-EASi
url: https://huggingface.co/datasets/cbasu/Med-EASi
split: test
metrics:
- type: SARI
value: 33.22
- type: FKGL
value: 10.53
- type: BERTScore
value: 44.87
- type: LENS
value: 48.53
- type: Identical ratio
value: 0.20
- type: Identical ratio (ci)
value: 0.20
- task:
type: text2text-generation
name: Text Simplification
dataset:
name: OneStopEnglish
type: OneStopEnglish
url: https://github.com/nishkalavallabhi/OneStopEnglishCorpus
split: advanced→elementary
metrics:
- type: SARI
value: 39.43
- type: FKGL
value: 8.16
- type: BERTScore
value: 78.24
- type: LENS
value: 60.31
- type: Identical ratio
value: 0.32
- type: Identical ratio (ci)
value: 0.32
---
# Model Card for Model ID
This is one of the models fine-tuned for text simplification as part of the [Simplify This](https://github.com/eilamc14/Simplify-This) project.
## Model Details
### Model Description
Fine-tuned **sequence-to-sequence (encoder–decoder) Transformer** for **English text simplification**.
Trained on the dataset **`eilamc14/wikilarge-clean`** (cleaned WikiLarge-style pairs).
- **Model type:** Seq2Seq Transformer (encoder–decoder)
- **Language (NLP):** English
- **License:** `apache-2.0`
- **Finetuned from model:** `t5-large`
### Model Sources
- **Repository (code):** https://github.com/eilamc14/Simplify-This
- **Dataset:** https://huggingface.co/datasets/eilamc14/wikilarge-clean
- **Paper [optional]:** —
- **Demo [optional]:** —
## Uses
### Direct Use
The model is intended for **English text simplification**.
- **Input format:** `Simplify: <complex sentence>`
- **Output:** `<simplified sentence>`
**Typical uses**
- Research on automatic text simplification
- Benchmarking against other simplification systems
- Demos/prototypes that require simpler English rewrites
### Downstream Use
This repository already contains a **fine-tuned** model specialized for text simplification.
Further fine-tuning is **optional** and mainly relevant when:
- Adapting to a markedly different domain (e.g., medical/legal/news)
- Addressing specific failure modes (e.g., over/under-simplification, factual drops)
- Distilling/quantizing for deployment constraints
When fine-tuning further, keep the same input convention: `Simplify: <...>`.
### Out-of-Scope Use
Not intended for:
- Tasks unrelated to simplification (dialogue, translation etc.)
- Production use without additional safety filtering (no toxicity/bias mitigation)
- Languages other than English
- High-stakes settings (legal/medical advice, safety-critical decisions)
## Bias, Risks, and Limitations
The model was trained on **Wikipedia and Simple English Wikipedia** alignments (via WikiLarge).
As a result, it inherits the characteristics and limitations of this data:
- **Domain bias:** Simplifications may reflect encyclopedic style; performance may degrade on informal, technical, or domain-specific text (e.g., medical/legal/news).
- **Content bias:** Wikipedia content itself contains biases in coverage, cultural perspective, and phrasing. Simplified outputs may reflect or amplify these.
- **Simplification quality:** The model may:
- Over-simplify (drop important details)
- Under-simplify (retain complex phrasing)
- Produce ungrammatical or awkward rephrasings
- **Language limitation:** Only suitable for English. Applying to other languages is unsupported.
- **Safety limitation:** The model has not been aligned to avoid toxic, biased, or harmful content. If the input text contains such content, the output may reproduce or modify it without safeguards.
### Recommendations
- **Evaluation required:** Always evaluate the model in the target domain before deployment. Benchmark simplification quality (e.g., with SARI, FKGL, BERTScore, LENS, human evaluation).
- **Human oversight:** Use human-in-the-loop review for applications where meaning preservation is critical (education, accessibility tools, etc.).
- **Attribution:** Preserve source attribution where required (Wikipedia → CC BY-SA).
- **Not for high-stakes use:** Avoid legal, medical, or safety-critical applications without extensive validation and domain adaptation.
## How to Get Started with the Model
Load the model and tokenizer directly from the Hugging Face Hub:
```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
model_id = "eilamc14/t5-large-text-simplification"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
# Example input
PREFIX = "Simplify: "
text = "The committee deemed the proposal unnecessarily complicated."
# Tokenize and generate
inputs = tokenizer(PREFIX+text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Training Details
### Training Data
[WikiLarge-clean](https://huggingface.co/datasets/eilamc14/wikilarge-clean) Dataset
### Training Procedure
- **Hardware:** NVIDIA L4 GPU on Google Colab
- **Objective:** Standard sequence-to-sequence cross-entropy loss
- **Training type:** Full fine-tuning of all parameters (no LoRA/PEFT used)
- **Batching:** Dynamic padding with Hugging Face `Trainer` / PyTorch DataLoader
- **Evaluation:** Monitored on the `validation` split with metrics (SARI and identical_ratio)
- **Stopping criteria:** Early stopping callback based on validation performance
#### Preprocessing
The dataset was preprocessed by prefixing each source sentence with **"Simplify: "** and tokenizing both the source (inputs) and target (labels).
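A rough sketch of that preprocessing step is shown below; the column names and maximum lengths are assumptions for illustration, not the project's exact code.
```python
# Sketch of the preprocessing described above: prefix each source with
# "Simplify: " and tokenize sources and targets. Column names and max lengths
# are assumptions; adapt them to the actual dataset schema.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("eilamc14/t5-large-text-simplification")
PREFIX = "Simplify: "

def preprocess(batch, max_len=128):
    model_inputs = tokenizer(
        [PREFIX + s for s in batch["complex"]],
        max_length=max_len,
        truncation=True,
    )
    labels = tokenizer(
        text_target=batch["simple"],
        max_length=max_len,
        truncation=True,
    )
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

# Typically applied with: tokenized = dataset.map(preprocess, batched=True)
```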
#### Memory & Checkpointing
To reduce VRAM during training, gradient checkpointing was enabled and the KV cache was disabled:
```python
model.config.use_cache = False # required when using gradient checkpointing
model.gradient_checkpointing_enable() # saves memory at the cost of extra compute
```
**Notes**
- Disabling `use_cache` avoids warnings/conflicts with gradient checkpointing and reduces memory usage in the forward pass.
- Gradient checkpointing trades **GPU memory ↓** for **training speed ↓** (extra recomputation).
- For **inference/evaluation**, re-enable the cache for faster generation:
```python
model.config.use_cache = True
```
#### Training Hyperparameters
The models were trained with Hugging Face `Seq2SeqTrainingArguments`.
Hyperparameters varied slightly across models and runs during tuning, and full logs (batch size, steps, exact LR schedule) were not preserved.
Below are the **typical defaults** used:
- **Epochs:** 5
- **Evaluation strategy:** every 300 steps
- **Save strategy:** every 300 steps (keep best model, `eval_loss` as criterion)
- **Learning rate:** ~3e-5
- **Batch size:** ~8-64, depending on model size
- **Optimizer:** `adamw_torch_fused`
- **Precision:** bf16
- **Generation config (during eval):** `max_length=128`, `num_beams=4`, `predict_with_generate=True`
- **Other settings:**
- Weight decay: 0.01
- Label smoothing: 0.1
- Warmup ratio: 0.1
- Max grad norm: 0.5
- Dataloader workers: 8 (L4 GPU)
> Because hyperparameters were adjusted between runs and not all were logged, exact reproduction may differ slightly.
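For orientation, a configuration along the lines of the typical defaults above might look like the sketch below; it is assembled from the values listed in this card, not recovered from the original runs.
```python
# Sketch of a Seq2SeqTrainingArguments setup mirroring the "typical defaults"
# above. Values are illustrative, not the exact (unpreserved) run config.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="t5-large-text-simplification",
    num_train_epochs=5,
    eval_strategy="steps",
    eval_steps=300,
    save_strategy="steps",
    save_steps=300,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
    learning_rate=3e-5,
    per_device_train_batch_size=16,  # varied (~8-64) with model size
    optim="adamw_torch_fused",
    bf16=True,
    predict_with_generate=True,
    generation_max_length=128,
    generation_num_beams=4,
    weight_decay=0.01,
    label_smoothing_factor=0.1,
    warmup_ratio=0.1,
    max_grad_norm=0.5,
    dataloader_num_workers=8,
)
```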
## Evaluation
### Testing Data
- [**ASSET**](https://huggingface.co/datasets/facebook/asset) (test subset)
- [**MEDEASI**](https://huggingface.co/datasets/cbasu/Med-EASi) (test subset)
- [**OneStopEnglish**](https://github.com/nishkalavallabhi/OneStopEnglishCorpus) (advanced → elementary)
### Metrics
- **Identical ratio** — share of outputs identical to the source after basic, language-agnostic normalization (strip, NFKC, collapse whitespace); a code sketch of this normalization is given after this list
- **Identical ratio (ci)** — case-insensitive variant of the identical ratio
- **SARI** — main simplification metric (higher is better)
- **FKGL** — readability grade level (lower is simpler)
- **BERTScore (F1)** — semantic similarity (higher is better)
- **LENS** — composite simplification quality score (higher is better)
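The identical-ratio normalization is simple enough to show directly; the sketch below implements the description above and is not the project's exact evaluation code.
```python
# Sketch of the "identical ratio" computation described above: strip,
# NFKC-normalize, collapse whitespace, and optionally lowercase before comparing.
import re
import unicodedata

def normalize(text: str, case_insensitive: bool = False) -> str:
    text = unicodedata.normalize("NFKC", text.strip())
    text = re.sub(r"\s+", " ", text)
    return text.lower() if case_insensitive else text

def identical_ratio(sources, outputs, case_insensitive=False):
    same = sum(
        normalize(s, case_insensitive) == normalize(o, case_insensitive)
        for s, o in zip(sources, outputs)
    )
    return same / len(sources)
```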
### Generation Arguments
```python
gen_args = dict(
max_new_tokens=64,
num_beams=4,
length_penalty=1.0,
no_repeat_ngram_size=3,
early_stopping=True,
do_sample=False,
)
```
### Results
| Dataset | Identical ratio | Identical ratio (ci) | SARI | FKGL | BERTScore | LENS |
|--------------------|----------------:|---------------------:|------:|-----:|----------:|------:|
| **ASSET** | 0.22 | 0.22 | 35.41 | 8.54 | 87.04 | 59.81 |
| **MEDEASI** | 0.20 | 0.20 | 33.22 | 10.53| 44.87 | 48.53 |
| **OneStopEnglish** | 0.32 | 0.32 | 39.43 | 8.16 | 78.24 | 60.31 |
## Environmental Impact
- **Hardware Type:** Single NVIDIA L4 GPU (Google Colab)
- **Hours used:** Approx. 5–10
- **Cloud Provider:** Google Cloud (via Colab)
- **Compute Region:** Unknown (Google Colab dynamic allocation)
- **Carbon Emitted:** Estimated to be very low (< a few kg CO₂eq), since training was limited to a single GPU for a small number of hours.
## Citation
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
|
FlippyDora/all-MiniLM-L6-v2-imdb-sentiment
|
FlippyDora
| 2025-09-23T20:04:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-23T20:04:28Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
nruigrok/TK2025_talkshows
|
nruigrok
| 2025-09-23T20:03:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-23T19:52:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Lordbask/blockassist
|
Lordbask
| 2025-09-23T19:49:12Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"restless reptilian caterpillar",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-12T07:26:11Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- restless reptilian caterpillar
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
winnieyangwannan/evwc_Qwen2.5-VL-7B-Instruct_mlp-down_pnas_layer_18_6_all_37_0.001_12800_15
|
winnieyangwannan
| 2025-09-23T19:44:15Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_5_vl",
"image-to-text",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
image-to-text
| 2025-09-23T19:42:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
prj-beatrice/japanese-hubert-base-phoneme-ctc-v3
|
prj-beatrice
| 2025-09-23T19:39:53Z | 25 | 1 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"hubert",
"automatic-speech-recognition",
"ja",
"dataset:reazon-research/reazonspeech",
"base_model:rinna/japanese-hubert-base",
"base_model:finetune:rinna/japanese-hubert-base",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2025-09-21T10:11:04Z |
---
library_name: transformers
license: apache-2.0
language: ja
datasets: reazon-research/reazonspeech
base_model: rinna/japanese-hubert-base
---
# japanese-hubert-base-phoneme-ctc-v3
This model is rinna/japanese-hubert-base fine-tuned for Japanese phoneme recognition with CTC.
## Changes from v2
* For kanji-kana transcripts, N-best analyses are produced with MeCab (fugashi) and each is post-processed with pyopenjtalk-plus; the candidate whose weighted sum of the CTC loss on its reading (computed with the [previous model version](https://huggingface.co/prj-beatrice/japanese-hubert-base-phoneme-ctc-v2)) and the MeCab cost is smallest is used as the ground-truth label.
* The conditions for excluding sentences from the training data were changed as follows:
  * Sentences containing any of 「9人」「9人」「九人」「今シーズン」「今大会」 are excluded from the training data.
  * Among sentences containing any of 「十」「0」「0」, those whose pronunciation estimated by pyopenjtalk-plus contains 「ジュッ」 are excluded from the training data (carried over from v2).
  * Sentences containing 「日本」「方」「明日」 are no longer excluded (changed from v2).
* Added the phoneme `ty`.
All other conditions are the same as in [v2](https://huggingface.co/prj-beatrice/japanese-hubert-base-phoneme-ctc-v2).
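The card does not include a usage snippet. Below is a minimal sketch that assumes the repository ships a standard CTC processor (feature extractor + tokenizer) loadable with `AutoProcessor`; adjust if the actual loading API differs.
```python
# Hedged sketch (not from this card): phoneme transcription with a standard
# transformers CTC pipeline, assuming the repo provides a compatible processor.
import torch
import librosa
from transformers import AutoProcessor, AutoModelForCTC

model_id = "prj-beatrice/japanese-hubert-base-phoneme-ctc-v3"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)
model.eval()

# HuBERT-base expects 16 kHz mono audio.
speech, _ = librosa.load("sample.wav", sr=16000, mono=True)
inputs = processor(speech, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding to a phoneme sequence.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```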
|
bhaveshsoni0023/gpt-oss-20b-multilingual-reasoner-1
|
bhaveshsoni0023
| 2025-09-23T19:36:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:openai/gpt-oss-20b",
"base_model:finetune:openai/gpt-oss-20b",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T17:15:08Z |
---
base_model: openai/gpt-oss-20b
library_name: transformers
model_name: gpt-oss-20b-multilingual-reasoner-1
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gpt-oss-20b-multilingual-reasoner-1
This model is a fine-tuned version of [openai/gpt-oss-20b](https://huggingface.co/openai/gpt-oss-20b).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="bhaveshsoni0023/gpt-oss-20b-multilingual-reasoner-1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0.dev20250319+cu128
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
amitkp621/AR-7-lora
|
amitkp621
| 2025-09-23T19:36:14Z | 0 | 0 |
diffusers
|
[
"diffusers",
"image-to-image",
"flux",
"lora",
"template:sd-lora",
"ai-toolkit",
"base_model:black-forest-labs/FLUX.1-Kontext-dev",
"base_model:adapter:black-forest-labs/FLUX.1-Kontext-dev",
"license:creativeml-openrail-m",
"region:us"
] |
image-to-image
| 2025-09-23T19:36:11Z |
---
tags:
- image-to-image
- flux
- lora
- diffusers
- template:sd-lora
- ai-toolkit
base_model: black-forest-labs/FLUX.1-Kontext-dev
license: creativeml-openrail-m
inference:
parameters:
width: 1536
height: 1024
instance_prompt: tryon
---
# AR-7-lora
Model trained with [AI Toolkit by Ostris](https://github.com/ostris/ai-toolkit)
## Trigger words
You should use `tryon` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, etc.
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/amitkp621/AR-7-lora/tree/main) them from the Files & versions tab.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-Kontext-dev', torch_dtype=torch.bfloat16).to('cuda')
pipeline.load_lora_weights('amitkp621/AR-7-lora', weight_name='AR-7.safetensors')
image = pipeline('tryon').images[0]
image.save("my_image.png")
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
amd/Zebra-Llama-8B-16MLA-16Mamba-SFT
|
amd
| 2025-09-23T19:32:18Z | 4 | 0 | null |
[
"safetensors",
"llama",
"alignment-handbook",
"generated_from_trainer",
"dataset:JunxiongWang/sftdatasetv3",
"arxiv:2505.17272",
"arxiv:2503.11132",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-06-13T00:55:06Z |
---
base_model:
- meta-llama/Llama-3.1-8B-Instruct
datasets:
- JunxiongWang/sftdatasetv3
model-index:
- name: Zebra-Llama-8B-16MLA-16Mamba-SFT
results: []
tags:
- alignment-handbook
- generated_from_trainer
license: apache-2.0
---
# Zebra-Llama: Towards Extremely Efficient Hybrid Models
Zebra-Llama is a family of hybrid large language models (LLMs) proposed by AMD that composes Multi-head Latent Attention (MLA) and Mamba2 for KV cache compression and computational efficiency.
This combination achieves Transformer-level accuracy with near-State Space Model (SSM) efficiency. While standard Transformers are limited by the quadratic complexity of self-attention and the large memory footprint of their key-value (KV) cache, Zebra-Llama offers a practical and scalable solution.
This model, `Zebra-Llama-8B-16MLA-16Mamba-SFT`, was created by efficiently adapting the pre-trained `Llama-3.1-8B-Instruct` model, with post-training conducted on AMD Instinct™ MI300X GPUs. This approach bypasses the need for costly pre-training from scratch.
<div align="center">
<img src="comparison.png" width="570" height="380" style="object-fit: contain;"/>
<em><b>Figure 1:</b> Comparing 8B-scale models on average LM Harness score vs. KV cache size. Zebra-Llama (green) matches or exceeds baselines with smaller KV cache and fewer training tokens. Circle and square sizes indicate training tokens (billions for post-training, trillions for pre-training).</em>
</div>
## Key Takeaways
- Announcing Zebra-Llama, a family of highly efficient 1B, 3B, and 8B hybrid models created by post-training adaptation of existing state-of-the-art Transformers.
- Extreme KV Cache Compression: Zebra-Llama dramatically reduces the KV cache size to 2%-4% of the original Llama model while preserving 100% of its average zero-shot performance on LM Harness tasks.
- Efficient Hybrid Architecture: Zebra-Llama strategically combines Multi-head Latent Attention (MLA) layers, which compress the KV cache, and Mamba2 (SSM) layers, which eliminate the KV cache entirely, to balance memory usage and performance.
- Novel Post-Training Pipeline: Zebra-Llama employs an efficient post-training pipeline featuring refined weight initialization, Intermediate Layer Distillation (ILD) for knowledge transfer, and a sensitivity-aware strategy (SMART) for optimal hybrid composition.
## Model Composition Pipeline
The Zebra-Llama models are not trained from scratch. Instead, they are composed from powerful pre-trained Transformers through a lightweight and efficient pipeline. The creation of this model followed these stages:
| Stage | Action | Description |
|-------------------|---------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1. Base Model | Llama-3.1-8B-Instruct | The starting point is a high-quality, pre-trained Transformer model. |
| 2. Initialization | Structured Weight Mapping | Pure Mamba2 and MLA models are initialized from the base model's weights using structured mapping techniques (SVD for MLA, reinterpretation for Mamba2). |
| 3. Refinement | Intermediate Layer Distillation (ILD) | The internal representations of the Mamba2 and MLA models are aligned with the base model's layers on a small dataset to ensure a strong starting point. |
| 4. Composition | SMART Layer Selection | A hybrid architecture is composed using the SMART (Sensitivity Measure-Aware Replacement of Transformer layers) strategy to optimally place each layer type. |
| 5. SFT | End-to-End Knowledge Distillation | The composed hybrid model is fine-tuned via knowledge distillation, using an 8B model as a teacher to transfer rich, pre-trained knowledge. |
| 6. Alignment | Direct Preference Optimization (DPO) | In the final stage, DPO is used to align the model's preferences, with the distilled student model itself serving as the reference model for stability. |
## Training Data
|Stage | Dataset | License |
|-----------|---------------------------------------------------------------------------|------------------------|
| ILD/SFT | https://huggingface.co/datasets/teknium/OpenHermes-2.5 | Refer source materials |
| ILD/SFT | https://huggingface.co/datasets/tomg-group-umd/GenQA | CC BY-NC 4.0 |
| ILD/SFT | https://huggingface.co/datasets/BAAI/Infinity-Instruct | CC BY-SA 4.0 |
| DPO | https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized | MIT |
| DPO | https://huggingface.co/datasets/HuggingFaceH4/orca_dpo_pairs | MIT |
| DPO | https://huggingface.co/datasets/JunxiongWang/llama3-ultrafeedback-armorm | MIT |
## Getting Started
### Installation
```
git clone https://github.com/AMD-AIG-AIMA/AMD-Hybrid-Models.git
```
Then follow the installation instructions in the `AMD-AIG-AIMA/AMD-Hybrid-Models` repo.
### Example Usage
Once the installation is complete, you can try the following code for a quick test:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from hybrid.hybrid_wrapper import HybridModelWrapper
checkpoint = "amd/Zebra-Llama-8B-16MLA-16Mamba-SFT"
model = HybridModelWrapper.from_pretrained(checkpoint, torch_dtype=torch.bfloat16).cuda()
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model.eval()
# Format the prompt using the chat template
prompt = [{"role": "user", "content": "What are the benefits of hybrid language models?"}]
input_ids = tokenizer.apply_chat_template(
prompt,
add_generation_prompt=True,
return_tensors='pt'
).cuda()
# Generate a response
tokens = model.generate(
input_ids,
max_new_tokens=256,
temperature=0.7,
do_sample=True,
eos_token_id=tokenizer.eos_token_id
)
print(tokenizer.decode(tokens[0], skip_special_tokens=False))
```
### Model details
| Model | KV Size | Param | Index of MLA layers | r<sub>kv</sub>| r<sub>q</sub> | d<sub>rope</sub> | d<sub>nope</sub> |
|-------|--------:|------:|-------------------:|------:|------:|---------:|---------:|
|Llama-3.1-8B-Instruct | 100% | 8.03B | - | -| -| -| -|
|Zebra-Llama-8B-8MLA-24Mamba | 2.73% | 8.38B | [0,4,8,12,16,20,25,30] | 160 | 2048 | 64 | 64 |
|Zebra-Llama-8B-16MLA-16Mamba | 5.47% | 8.19B | [0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30] | 160 | 2048 | 64 | 64 |
### Benchmark results
Zebra-Llama was evaluated on the Language Model Harness benchmark for zero-shot tasks and compared against its base model and other post-training methods. The results demonstrate that Zebra-Llama provides a superior balance of performance and efficiency.
| Tasks | Metric | Llama-3.1-8B-Instruct | Zebra-Llama-8B-8MLA-24M2-SFT | Zebra-Llama-8B-8MLA-24M2-DPO | Zebra-Llama-8B-16MLA-16M2-SFT | Zebra-Llama-8B-16MLA-16M2-DPO |
|-------------------|----------|------------------: |----: |----:| ----:|----:|
| arc_challenge | acc | 0.5162±0.0146 | 0.4923±0.0146 | 0.5648±0.0145 | 0.4898±0.0146 | 0.5717±0.0145 |
| | acc_norm | 0.5512±0.0145 | 0.5205±0.0146 | 0.5887±0.0144 | 0.5213±0.0146 | 0.5862±0.0144 |
| arc_easy | acc | 0.8186±0.0079 | 0.8068±0.0081 | 0.8274±0.0078 | 0.803±0.0082 | 0.8312±0.0077 |
| | acc_norm | 0.7963±0.0083 | 0.7706±0.0086 | 0.7917±0.0083 | 0.7689±0.0086 | 0.7837±0.0084 |
| hellaswag | acc | 0.5913±0.0049 | 0.5496±0.005 | 0.5906±0.0049 | 0.5569±0.005 | 0.5953±0.0049 |
| | acc_norm | 0.7919±0.0041 | 0.7427±0.0044 | 0.786±0.0041 | 0.7491±0.0043 | 0.7927±0.004 |
| mmlu | acc | 0.6814±0.1113 | 0.5567±0.1157 | 0.546±0.1124 | 0.5882±0.1212 | 0.5817±0.1055 |
| - humanities | acc | 0.6436±0.0906 | 0.5048±0.1131 | 0.4944±0.1132 | 0.535±0.1277 | 0.5596±0.09 |
| - other | acc | 0.7441±0.1097 | 0.6283±0.0983 | 0.6061±0.0982 | 0.6521±0.1 | 0.6283±0.0995 |
| - social_sciences | acc | 0.7709±0.0621 | 0.6487±0.0733 | 0.6344±0.0791 | 0.6887±0.0752 | 0.6581±0.0757 |
| - stem | acc | 0.5886±0.1252 | 0.4738±0.1179 | 0.4776±0.1094 | 0.5065±0.1161 | 0.4941±0.1154 |
| openbookqa | acc | 0.334±0.0211 | 0.32±0.0209 | 0.348±0.0213 | 0.32±0.0209 | 0.364±0.0215 |
| | acc_norm | 0.43±0.0222 | 0.418±0.0221 | 0.436±0.0222 | 0.43±0.0222 | 0.434±0.0222 |
| piqa | acc | 0.7992±0.0093 | 0.7878±0.0095 | 0.8025±0.0093 | 0.7933±0.0094 | 0.7992±0.0093 |
| | acc_norm | 0.8085±0.0092 | 0.7884±0.0095 | 0.7943±0.0094 | 0.7927±0.0095 | 0.8003±0.0093 |
| pubmedqa | acc | 0.756±0.0192 | 0.688±0.0207 | 0.718±0.0201 | 0.728±0.0199 | 0.728±0.0199 |
| race | acc | 0.4459±0.0154 | 0.4077±0.0152 | 0.4622±0.0154 | 0.4239±0.0153 | 0.4928±0.0155 |
| winogrande | acc | 0.7427±0.0123 | 0.6969±0.0129 | 0.7245±0.0126 | 0.7032±0.0128 | 0.7261±0.0125 |
## Conclusion
Zebra-Llama demonstrates a practical and scalable framework for composing highly efficient hybrid models from existing pre-trained Transformers. By intelligently combining MLA and Mamba2 layers, this approach drastically reduces memory requirements and improves inference throughput while preserving the strong capabilities of the original model. This work highlights the viability of post-training hybridization as a cost-effective and environmentally sustainable alternative to full retraining, paving the way for the deployment of powerful LLMs in resource-constrained environments.
## Bias, Risks, and Limitations
- This model is a research artifact and has not been evaluated for safety in production use cases.
- The model's performance is dependent on the quality of its pre-trained base model and the teacher model used during distillation. Its capabilities and biases are inherited from these sources.
- The model may generate content that is factually inaccurate, biased, or otherwise objectionable. Users should be aware of these risks and implement appropriate safeguards for their applications.
- One limitation of this work is the reliance on a strong teacher model for knowledge transfer, which may not always be available. Distillation from a teacher also adds to the resource requirements during the post-training phase.
## Citation
If you find this model useful, please consider citing the original paper:
```
@article{yang2025zebra,
title={Zebra-Llama: Towards Extremely Efficient Hybrid Models},
author={Yang, Mingyu and Rezagholizadeh, Mehdi and Li, Guihong and Appia, Vikram and Barsoum, Emad},
journal={arXiv preprint arXiv:2505.17272},
year={2025}
}
@article{li2025x,
title={X-ecomla: Upcycling pre-trained attention into mla for efficient and extreme kv compression},
author={Li, Guihong and Rezagholizadeh, Mehdi and Yang, Mingyu and Appia, Vikram and Barsoum, Emad},
journal={arXiv preprint arXiv:2503.11132},
year={2025}
}
```
|
amd/Zebra-Llama-8B-8MLA-24Mamba-SFT
|
amd
| 2025-09-23T19:31:55Z | 4 | 0 | null |
[
"safetensors",
"llama",
"alignment-handbook",
"generated_from_trainer",
"dataset:JunxiongWang/sftdatasetv3",
"arxiv:2505.17272",
"arxiv:2503.11132",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:apache-2.0",
"region:us"
] | null | 2025-06-11T23:23:58Z |
---
base_model:
- meta-llama/Llama-3.1-8B-Instruct
datasets:
- JunxiongWang/sftdatasetv3
model-index:
- name: Zebra-Llama-8B-8MLA-24Mamba-SFT
results: []
tags:
- alignment-handbook
- generated_from_trainer
license: apache-2.0
---
# Zebra-Llama: Towards Extremely Efficient Hybrid Models
Zebra-Llama is a family of hybrid large language models (LLMs) proposed by AMD that composes Multi-head Latent Attention (MLA) and Mamba2 for KV cache compression and computational efficiency.
This combination achieves Transformer-level accuracy with near-State Space Model (SSM) efficiency. While standard Transformers are limited by the quadratic complexity of self-attention and the large memory footprint of their key-value (KV) cache, Zebra-Llama offers a practical and scalable solution.
This model, `Zebra-Llama-8B-8MLA-24Mamba-SFT`, was created by efficiently adapting the pre-trained `Llama-3.1-8B-Instruct` model, with post-training conducted on AMD Instinct™ MI300X GPUs. This approach bypasses the need for costly pre-training from scratch.
<div align="center">
<img src="comparison.png" width="570" height="380" style="object-fit: contain;"/>
<em><b>Figure 1:</b> Comparing 8B-scale models on average LM Harness score vs. KV cache size. Zebra-Llama (green) matches or exceeds baselines with smaller KV cache and fewer training tokens. Circle and square sizes indicate training tokens (billions for post-training, trillions for pre-training).</em>
</div>
## Key Takeaways
- Announcing Zebra-Llama, a family of highly efficient 1B, 3B, and 8B hybrid models created by post-training adaptation of existing state-of-the-art Transformers.
- Extreme KV Cache Compression: Zebra-Llama dramatically reduces the KV cache size to 2%-4% of the original Llama model while preserving 100% of its average zero-shot performance on LM Harness tasks.
- Efficient Hybrid Architecture: Zebra-Llama strategically combines Multi-head Latent Attention (MLA) layers, which compress the KV cache, and Mamba2 (SSM) layers, which eliminate the KV cache entirely, to balance memory usage and performance.
- Novel Post-Training Pipeline: Zebra-Llama employs an efficient post-training pipeline featuring refined weight initialization, Intermediate Layer Distillation (ILD) for knowledge transfer, and a sensitivity-aware strategy (SMART) for optimal hybrid composition.
## Model Composition Pipeline
The Zebra-Llama models are not trained from scratch. Instead, they are composed from powerful pre-trained Transformers through a lightweight and efficient pipeline. The creation of this model followed these stages:
| Stage | Action | Description |
|-------------------|---------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1. Base Model | Llama-3.1-8B-Instruct | The starting point is a high-quality, pre-trained Transformer model. |
| 2. Initialization | Structured Weight Mapping | Pure Mamba2 and MLA models are initialized from the base model's weights using structured mapping techniques (SVD for MLA, reinterpretation for Mamba2). |
| 3. Refinement | Intermediate Layer Distillation (ILD) | The internal representations of the Mamba2 and MLA models are aligned with the base model's layers on a small dataset to ensure a strong starting point. |
| 4. Composition | SMART Layer Selection | A hybrid architecture is composed using the SMART (Sensitivity Measure-Aware Replacement of Transformer layers) strategy to optimally place each layer type. |
| 5. SFT | End-to-End Knowledge Distillation | The composed hybrid model is fine-tuned via knowledge distillation, using an 8B model as a teacher to transfer rich, pre-trained knowledge. |
| 6. Alignment | Direct Preference Optimization (DPO) | In the final stage, DPO is used to align the model's preferences, with the distilled student model itself serving as the reference model for stability. |
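For a more concrete picture of Stage 5, the snippet below is a minimal sketch of a standard forward-KL distillation objective of the kind used for end-to-end knowledge transfer. The exact Zebra-Llama recipe (temperature, any auxiliary cross-entropy term, token masking) is defined in the AMD-Hybrid-Models repository, so the values here are illustrative assumptions only.
```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature: float = 2.0):
    """Forward-KL distillation between teacher and student token distributions.

    Minimal sketch only: the hyperparameters and any extra loss terms used for
    Zebra-Llama live in the AMD-Hybrid-Models training code, not here.
    """
    # Soften both distributions with the same temperature.
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # KL(teacher || student), averaged over the batch; scale by T^2 so gradient
    # magnitudes stay comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature ** 2
```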
## Training Data
|Stage | Dataset | License |
|-----------|---------------------------------------------------------------------------|------------------------|
| ILD/SFT | https://huggingface.co/datasets/teknium/OpenHermes-2.5 | Refer to source materials |
| ILD/SFT | https://huggingface.co/datasets/tomg-group-umd/GenQA | CC BY-NC 4.0 |
| ILD/SFT | https://huggingface.co/datasets/BAAI/Infinity-Instruct | CC BY-SA 4.0 |
| DPO | https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized | MIT |
| DPO | https://huggingface.co/datasets/HuggingFaceH4/orca_dpo_pairs | MIT |
| DPO | https://huggingface.co/datasets/JunxiongWang/llama3-ultrafeedback-armorm | MIT |
## Getting Started
### Installation
```
git clone https://github.com/AMD-AIG-AIMA/AMD-Hybrid-Models.git
```
Then follow the installation instructions in the `AMD-AIG-AIMA/AMD-Hybrid-Models` repo.
### Example Usage
Once the installation is complete, you can run the following code for a quick test:
```python
import torch
from transformers import AutoTokenizer
from hybrid.hybrid_wrapper import HybridModelWrapper
checkpoint = "amd/Zebra-Llama-8B-8MLA-24Mamba-SFT"
model = HybridModelWrapper.from_pretrained(checkpoint, torch_dtype=torch.bfloat16).cuda()
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model.eval()
# Format the prompt using the chat template
prompt = [{"role": "user", "content": "What are the benefits of hybrid language models?"}]
input_ids = tokenizer.apply_chat_template(
prompt,
add_generation_prompt=True,
return_tensors='pt'
).cuda()
# Generate a response
tokens = model.generate(
input_ids,
max_new_tokens=256,
temperature=0.7,
do_sample=True,
eos_token_id=tokenizer.eos_token_id
)
print(tokenizer.decode(tokens[0], skip_special_tokens=False))
```
### Model details
| Model | KV Size | Param | Index of MLA layers | r<sub>kv</sub>| r<sub>q</sub> | d<sub>rope</sub> | d<sub>nope</sub> |
|-------|--------:|------:|-------------------:|------:|------:|---------:|---------:|
|Llama-3.1-8B-Instruct | 100% | 8.03B | - | -| -| -| -|
|Zebra-Llama-8B-8MLA-24Mamba | 2.73% | 8.38B | [0,4,8,12,16,20,25,30] | 160 | 2048 | 64 | 64 |
|Zebra-Llama-8B-16MLA-16Mamba | 5.47% | 8.19B | [0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30] | 160 | 2048 | 64 | 64 |
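The KV-size column can be reproduced from the table values, assuming Llama-3.1-8B's standard attention configuration (32 layers, 8 grouped-query KV heads, head dimension 128): each MLA layer caches only the r<sub>kv</sub> latent plus the d<sub>rope</sub> decoupled RoPE key per token, while Mamba2 layers cache nothing.
```python
# Cached values per token for the Llama-3.1-8B baseline:
# 32 layers x (keys + values) x 8 KV heads x head dimension 128.
llama_kv = 32 * 2 * 8 * 128      # 65,536 values per token

# Each MLA layer stores the compressed latent (r_kv) plus the decoupled RoPE key (d_rope).
mla_kv_per_layer = 160 + 64      # values from the table above

print(f"{8 * mla_kv_per_layer / llama_kv:.2%}")    # 2.73% for 8MLA-24Mamba
print(f"{16 * mla_kv_per_layer / llama_kv:.2%}")   # 5.47% for 16MLA-16Mamba
```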
### Benchmark results
Zebra-Llama was evaluated on the Language Model Harness benchmark for zero-shot tasks and compared against its base model and other post-training methods. The results demonstrate that Zebra-Llama provides a superior balance of performance and efficiency.
| Tasks | Metric | Llama-3.1-8B-Instruct | Zebra-Llama-8B-8MLA-24M2-SFT | Zebra-Llama-8B-8MLA-24M2-DPO | Zebra-Llama-8B-16MLA-16M2-SFT | Zebra-Llama-8B-16MLA-16M2-DPO |
|-------------------|----------|------------------: |----: |----:| ----:|----:|
| arc_challenge | acc | 0.5162±0.0146 | 0.4923±0.0146 | 0.5648±0.0145 | 0.4898±0.0146 | 0.5717±0.0145 |
| | acc_norm | 0.5512±0.0145 | 0.5205±0.0146 | 0.5887±0.0144 | 0.5213±0.0146 | 0.5862±0.0144 |
| arc_easy | acc | 0.8186±0.0079 | 0.8068±0.0081 | 0.8274±0.0078 | 0.803±0.0082 | 0.8312±0.0077 |
| | acc_norm | 0.7963±0.0083 | 0.7706±0.0086 | 0.7917±0.0083 | 0.7689±0.0086 | 0.7837±0.0084 |
| hellaswag | acc | 0.5913±0.0049 | 0.5496±0.005 | 0.5906±0.0049 | 0.5569±0.005 | 0.5953±0.0049 |
| | acc_norm | 0.7919±0.0041 | 0.7427±0.0044 | 0.786±0.0041 | 0.7491±0.0043 | 0.7927±0.004 |
| mmlu | acc | 0.6814±0.1113 | 0.5567±0.1157 | 0.546±0.1124 | 0.5882±0.1212 | 0.5817±0.1055 |
| - humanities | acc | 0.6436±0.0906 | 0.5048±0.1131 | 0.4944±0.1132 | 0.535±0.1277 | 0.5596±0.09 |
| - other | acc | 0.7441±0.1097 | 0.6283±0.0983 | 0.6061±0.0982 | 0.6521±0.1 | 0.6283±0.0995 |
| - social_sciences | acc | 0.7709±0.0621 | 0.6487±0.0733 | 0.6344±0.0791 | 0.6887±0.0752 | 0.6581±0.0757 |
| - stem | acc | 0.5886±0.1252 | 0.4738±0.1179 | 0.4776±0.1094 | 0.5065±0.1161 | 0.4941±0.1154 |
| openbookqa | acc | 0.334±0.0211 | 0.32±0.0209 | 0.348±0.0213 | 0.32±0.0209 | 0.364±0.0215 |
| | acc_norm | 0.43±0.0222 | 0.418±0.0221 | 0.436±0.0222 | 0.43±0.0222 | 0.434±0.0222 |
| piqa | acc | 0.7992±0.0093 | 0.7878±0.0095 | 0.8025±0.0093 | 0.7933±0.0094 | 0.7992±0.0093 |
| | acc_norm | 0.8085±0.0092 | 0.7884±0.0095 | 0.7943±0.0094 | 0.7927±0.0095 | 0.8003±0.0093 |
| pubmedqa | acc | 0.756±0.0192 | 0.688±0.0207 | 0.718±0.0201 | 0.728±0.0199 | 0.728±0.0199 |
| race | acc | 0.4459±0.0154 | 0.4077±0.0152 | 0.4622±0.0154 | 0.4239±0.0153 | 0.4928±0.0155 |
| winogrande | acc | 0.7427±0.0123 | 0.6969±0.0129 | 0.7245±0.0126 | 0.7032±0.0128 | 0.7261±0.0125 |
## Conclusion
Zebra-Llama demonstrates a practical and scalable framework for composing highly efficient hybrid models from existing pre-trained Transformers. By intelligently combining MLA and Mamba2 layers, this approach drastically reduces memory requirements and improves inference throughput while preserving the strong capabilities of the original model. This work highlights the viability of post-training hybridization as a cost-effective and environmentally sustainable alternative to full retraining, paving the way for the deployment of powerful LLMs in resource-constrained environments.
## Bias, Risks, and Limitations
- This model is a research artifact and has not been evaluated for safety in production use cases.
- The model's performance is dependent on the quality of its pre-trained base model and the teacher model used during distillation. Its capabilities and biases are inherited from these sources.
- The model may generate content that is factually inaccurate, biased, or otherwise objectionable. Users should be aware of these risks and implement appropriate safeguards for their applications.
- One limitation of this work is the reliance on a strong teacher model for knowledge transfer, which may not always be available. Distillation from a teacher also adds to the resource requirements during the post-training phase.
## Citation
If you find this model useful, please consider citing the original paper:
```
@article{yang2025zebra,
title={Zebra-Llama: Towards Extremely Efficient Hybrid Models},
author={Yang, Mingyu and Rezagholizadeh, Mehdi and Li, Guihong and Appia, Vikram and Barsoum, Emad},
journal={arXiv preprint arXiv:2505.17272},
year={2025}
}
@article{li2025x,
title={X-EcoMLA: Upcycling Pre-Trained Attention into MLA for Efficient and Extreme KV Compression},
author={Li, Guihong and Rezagholizadeh, Mehdi and Yang, Mingyu and Appia, Vikram and Barsoum, Emad},
journal={arXiv preprint arXiv:2503.11132},
year={2025}
}
```
|
amd/Zebra-Llama-8B-8MLA-24Mamba-DPO
|
amd
| 2025-09-23T19:31:45Z | 21 | 0 | null |
[
"safetensors",
"llama",
"alignment-handbook",
"generated_from_trainer",
"dataset:HuggingFaceH4/ultrafeedback_binarized",
"dataset:HuggingFaceH4/orca_dpo_pairs",
"dataset:JunxiongWang/llama3-ultrafeedback-armorm",
"arxiv:2505.17272",
"arxiv:2503.11132",
"base_model:amd/Zebra-Llama-8B-8MLA-24Mamba-SFT",
"base_model:finetune:amd/Zebra-Llama-8B-8MLA-24Mamba-SFT",
"license:apache-2.0",
"region:us"
] | null | 2025-06-11T23:41:14Z |
---
base_model:
- amd/Zebra-Llama-8B-8MLA-24Mamba-SFT
datasets:
- HuggingFaceH4/ultrafeedback_binarized
- HuggingFaceH4/orca_dpo_pairs
- JunxiongWang/llama3-ultrafeedback-armorm
model-index:
- name: Zebra-Llama-8B-8MLA-24Mamba-DPO
results: []
tags:
- alignment-handbook
- generated_from_trainer
license: apache-2.0
---
# Zebra-Llama: Towards Extremely Efficient Hybrid Models
Zebra-Llama is a family of hybrid large language models (LLMs) proposed by AMD that composes Multi-head Latent Attention (MLA) and Mamba2 for KV cache compression and computational efficiency.
This combination achieves Transformer-level accuracy with near-State Space Model (SSM) efficiency. While standard Transformers are limited by the quadratic complexity of self-attention and the large memory footprint of their key-value (KV) cache, Zebra-Llama offers a practical and scalable solution.
This model, `Zebra-Llama-8B-8MLA-24Mamba-DPO`, is created by efficiently adapting the pre-trained `Llama-3.1-8B-Instruct` model through post-training on AMD Instinct™ MI300X GPUs. This training approach bypasses the need for costly pre-training from scratch.
<div align="center">
<img src="comparison.png" width="570" height="380" style="object-fit: contain;"/>
<em><b>Figure 1:</b> Comparing 8B-scale models on average LM Harness score vs. KV cache size. Zebra-Llama (green) matches or exceeds baselines with smaller KV cache and fewer training tokens. Circle and square sizes indicate training tokens (billions for post-training, trillions for pre-training).</em>
</div>
## Key Takeaways
- Announcing Zebra-Llama, a family of highly efficient 1B, 3B, and 8B hybrid models created by post-training adaptation of existing state-of-the-art Transformers.
- Extreme KV Cache Compression: Zebra-Llama dramatically reduces the KV cache size to 2%-4% of the original Llama model while preserving 100% of its average zero-shot performance on LM Harness tasks.
- Efficient Hybrid Architecture: Zebra-Llama strategically combines Multi-head Latent Attention (MLA) layers, which compress the KV cache, and Mamba2 (SSM) layers, which eliminate the KV cache entirely, to balance memory usage and performance.
- Novel Post-Training Pipeline: Zebra-Llama employs an efficient post-training pipeline featuring refined weight initialization, Intermediate Layer Distillation (ILD) for knowledge transfer, and a sensitivity-aware strategy (SMART) for optimal hybrid composition.
## Model Composition Pipeline
The Zebra-Llama models are not trained from scratch. Instead, they are composed from powerful pre-trained Transformers through a lightweight and efficient pipeline. The creation of this model followed these stages:
| Stage | Action | Description |
|-------------------|---------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 1. Base Model | Llama-3.1-8B-Instruct | The starting point is a high-quality, pre-trained Transformer model. |
| 2. Initialization | Structured Weight Mapping | Pure Mamba2 and MLA models are initialized from the base model's weights using structured mapping techniques (SVD for MLA, reinterpretation for Mamba2). |
| 3. Refinement | Intermediate Layer Distillation (ILD) | The internal representations of the Mamba2 and MLA models are aligned with the base model's layers on a small dataset to ensure a strong starting point. |
| 4. Composition | SMART Layer Selection | A hybrid architecture is composed using the SMART (Sensitivity Measure-Aware Replacement of Transformer layers) strategy to optimally place each layer type. |
| 5. SFT | End-to-End Knowledge Distillation | The composed hybrid model is fine-tuned via knowledge distillation, using an 8B model as a teacher to transfer rich, pre-trained knowledge. |
| 6. Alignment | Direct Preference Optimization (DPO) | In the final stage, DPO is used to align the model's preferences, with the distilled student model itself serving as the reference model for stability. |
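As a concrete illustration of Stage 6, the snippet below sketches the standard DPO objective with the SFT-distilled student serving as the frozen reference policy. The beta value and any label smoothing used for Zebra-Llama are not stated in this card, so treat the specifics as assumptions.
```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta: float = 0.1):
    """Standard DPO loss; a minimal sketch, not the exact Zebra-Llama training code.

    The reference log-probabilities come from the frozen SFT-distilled student,
    as described in the Alignment stage above.
    """
    # Policy-vs-reference log-ratios for the preferred and dispreferred responses.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between chosen and rejected responses.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```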
## Training Data
|Stage | Dataset | License |
|-----------|---------------------------------------------------------------------------|------------------------|
| ILD/SFT | https://huggingface.co/datasets/teknium/OpenHermes-2.5 | Refer to source materials |
| ILD/SFT | https://huggingface.co/datasets/tomg-group-umd/GenQA | CC BY-NC 4.0 |
| ILD/SFT | https://huggingface.co/datasets/BAAI/Infinity-Instruct | CC BY-SA 4.0 |
| DPO | https://huggingface.co/datasets/HuggingFaceH4/ultrafeedback_binarized | MIT |
| DPO | https://huggingface.co/datasets/HuggingFaceH4/orca_dpo_pairs | MIT |
| DPO | https://huggingface.co/datasets/JunxiongWang/llama3-ultrafeedback-armorm | MIT |
## Getting Started
### Installation
```
git clone https://github.com/AMD-AIG-AIMA/AMD-Hybrid-Models.git
```
Then follow the installation instructions in the `AMD-AIG-AIMA/AMD-Hybrid-Models` repo.
### Example Usage
Once the installation is complete, you can run the following code for a quick test:
```python
import torch
from transformers import AutoTokenizer
from hybrid.hybrid_wrapper import HybridModelWrapper
checkpoint = "amd/Zebra-Llama-8B-8MLA-24Mamba-DPO"
model = HybridModelWrapper.from_pretrained(checkpoint, torch_dtype=torch.bfloat16).cuda()
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model.eval()
# Format the prompt using the chat template
prompt = [{"role": "user", "content": "What are the benefits of hybrid language models?"}]
input_ids = tokenizer.apply_chat_template(
prompt,
add_generation_prompt=True,
return_tensors='pt'
).cuda()
# Generate a response
tokens = model.generate(
input_ids,
max_new_tokens=256,
temperature=0.7,
do_sample=True,
eos_token_id=tokenizer.eos_token_id
)
print(tokenizer.decode(tokens[0], skip_special_tokens=False))
```
### Model details
| Model | KV Size | Param | Index of MLA layers | r<sub>kv</sub>| r<sub>q</sub> | d<sub>rope</sub> | d<sub>nope</sub> |
|-------|--------:|------:|-------------------:|------:|------:|---------:|---------:|
|Llama-3.1-8B-Instruct | 100% | 8.03B | - | -| -| -| -|
|Zebra-Llama-8B-8MLA-24Mamba | 2.73% | 8.38B | [0,4,8,12,16,20,25,30] | 160 | 2048 | 64 | 64 |
|Zebra-Llama-8B-16MLA-16Mamba | 5.47% | 8.19B | [0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30] | 160 | 2048 | 64 | 64 |
### Benchmark results
Zebra-Llama was evaluated on the Language Model Harness benchmark for zero-shot tasks and compared against its base model and other post-training methods. The results demonstrate that Zebra-Llama provides a superior balance of performance and efficiency.
| Tasks | Metric | Llama-3.1-8B-Instruct | Zebra-Llama-8B-8MLA-24M2-SFT | Zebra-Llama-8B-8MLA-24M2-DPO | Zebra-Llama-8B-16MLA-16M2-SFT | Zebra-Llama-8B-16MLA-16M2-DPO |
|-------------------|----------|------------------: |----: |----:| ----:|----:|
| arc_challenge | acc | 0.5162±0.0146 | 0.4923±0.0146 | 0.5648±0.0145 | 0.4898±0.0146 | 0.5717±0.0145 |
| | acc_norm | 0.5512±0.0145 | 0.5205±0.0146 | 0.5887±0.0144 | 0.5213±0.0146 | 0.5862±0.0144 |
| arc_easy | acc | 0.8186±0.0079 | 0.8068±0.0081 | 0.8274±0.0078 | 0.803±0.0082 | 0.8312±0.0077 |
| | acc_norm | 0.7963±0.0083 | 0.7706±0.0086 | 0.7917±0.0083 | 0.7689±0.0086 | 0.7837±0.0084 |
| hellaswag | acc | 0.5913±0.0049 | 0.5496±0.005 | 0.5906±0.0049 | 0.5569±0.005 | 0.5953±0.0049 |
| | acc_norm | 0.7919±0.0041 | 0.7427±0.0044 | 0.786±0.0041 | 0.7491±0.0043 | 0.7927±0.004 |
| mmlu | acc | 0.6814±0.1113 | 0.5567±0.1157 | 0.546±0.1124 | 0.5882±0.1212 | 0.5817±0.1055 |
| - humanities | acc | 0.6436±0.0906 | 0.5048±0.1131 | 0.4944±0.1132 | 0.535±0.1277 | 0.5596±0.09 |
| - other | acc | 0.7441±0.1097 | 0.6283±0.0983 | 0.6061±0.0982 | 0.6521±0.1 | 0.6283±0.0995 |
| - social_sciences | acc | 0.7709±0.0621 | 0.6487±0.0733 | 0.6344±0.0791 | 0.6887±0.0752 | 0.6581±0.0757 |
| - stem | acc | 0.5886±0.1252 | 0.4738±0.1179 | 0.4776±0.1094 | 0.5065±0.1161 | 0.4941±0.1154 |
| openbookqa | acc | 0.334±0.0211 | 0.32±0.0209 | 0.348±0.0213 | 0.32±0.0209 | 0.364±0.0215 |
| | acc_norm | 0.43±0.0222 | 0.418±0.0221 | 0.436±0.0222 | 0.43±0.0222 | 0.434±0.0222 |
| piqa | acc | 0.7992±0.0093 | 0.7878±0.0095 | 0.8025±0.0093 | 0.7933±0.0094 | 0.7992±0.0093 |
| | acc_norm | 0.8085±0.0092 | 0.7884±0.0095 | 0.7943±0.0094 | 0.7927±0.0095 | 0.8003±0.0093 |
| pubmedqa | acc | 0.756±0.0192 | 0.688±0.0207 | 0.718±0.0201 | 0.728±0.0199 | 0.728±0.0199 |
| race | acc | 0.4459±0.0154 | 0.4077±0.0152 | 0.4622±0.0154 | 0.4239±0.0153 | 0.4928±0.0155 |
| winogrande | acc | 0.7427±0.0123 | 0.6969±0.0129 | 0.7245±0.0126 | 0.7032±0.0128 | 0.7261±0.0125 |
## Conclusion
Zebra-Llama demonstrates a practical and scalable framework for composing highly efficient hybrid models from existing pre-trained Transformers. By intelligently combining MLA and Mamba2 layers, this approach drastically reduces memory requirements and improves inference throughput while preserving the strong capabilities of the original model. This work highlights the viability of post-training hybridization as a cost-effective and environmentally sustainable alternative to full retraining, paving the way for the deployment of powerful LLMs in resource-constrained environments.
## Bias, Risks, and Limitations
- This model is a research artifact and has not been evaluated for safety in production use cases.
- The model's performance is dependent on the quality of its pre-trained base model and the teacher model used during distillation. Its capabilities and biases are inherited from these sources.
- The model may generate content that is factually inaccurate, biased, or otherwise objectionable. Users should be aware of these risks and implement appropriate safeguards for their applications.
- One limitation of this work is the reliance on a strong teacher model for knowledge transfer, which may not always be available. Distillation from a teacher also adds to the resource requirements during the post-training phase.
## Citation
If you find this model useful, please consider citing the original paper:
```
@article{yang2025zebra,
title={Zebra-Llama: Towards Extremely Efficient Hybrid Models},
author={Yang, Mingyu and Rezagholizadeh, Mehdi and Li, Guihong and Appia, Vikram and Barsoum, Emad},
journal={arXiv preprint arXiv:2505.17272},
year={2025}
}
@article{li2025x,
title={X-ecomla: Upcycling pre-trained attention into mla for efficient and extreme kv compression},
author={Li, Guihong and Rezagholizadeh, Mehdi and Yang, Mingyu and Appia, Vikram and Barsoum, Emad},
journal={arXiv preprint arXiv:2503.11132},
year={2025}
}
```
|
patrickamadeus/nanoVLM-230M-8k-vanilla-coco-caption-instruct-2500
|
patrickamadeus
| 2025-09-23T19:30:33Z | 0 | 0 |
nanovlm
|
[
"nanovlm",
"safetensors",
"vision-language",
"multimodal",
"research",
"image-text-to-text",
"license:mit",
"region:us"
] |
image-text-to-text
| 2025-09-23T19:30:01Z |
---
# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
# Doc / guide: https://huggingface.co/docs/hub/model-cards
library_name: nanovlm
license: mit
pipeline_tag: image-text-to-text
tags:
- vision-language
- multimodal
- research
---
**nanoVLM** is a minimal and lightweight Vision-Language Model (VLM) designed for efficient training and experimentation. Built using pure PyTorch, the entire model architecture and training logic fits within ~750 lines of code. It combines a ViT-based image encoder (SigLIP-B/16-224-85M) with a lightweight causal language model (SmolLM2-135M), resulting in a compact 222M parameter model.
For more information, check out the base model on https://huggingface.co/lusxvr/nanoVLM-222M.
**Usage:**
Clone the nanoVLM repository: https://github.com/huggingface/nanoVLM.
Follow the install instructions and run the following code:
```python
from models.vision_language_model import VisionLanguageModel
model = VisionLanguageModel.from_pretrained("patrickamadeus/nanoVLM-230M-8k-vanilla-coco-caption-instruct-2500")
```
|
mradermacher/SikuRoBERTa_Bronze-GGUF
|
mradermacher
| 2025-09-23T19:25:37Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"bronze",
"paleography",
"epigraphy",
"zh",
"dataset:wjhuah/BIRD",
"base_model:wjhuah/SikuRoBERTa_Bronze",
"base_model:quantized:wjhuah/SikuRoBERTa_Bronze",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-09-23T19:24:40Z |
---
base_model: wjhuah/SikuRoBERTa_Bronze
datasets:
- wjhuah/BIRD
language:
- zh
library_name: transformers
license: cc-by-4.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- bronze
- paleography
- epigraphy
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/wjhuah/SikuRoBERTa_Bronze
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#SikuRoBERTa_Bronze-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/SikuRoBERTa_Bronze-GGUF/resolve/main/SikuRoBERTa_Bronze.Q2_K.gguf) | Q2_K | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SikuRoBERTa_Bronze-GGUF/resolve/main/SikuRoBERTa_Bronze.Q3_K_S.gguf) | Q3_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SikuRoBERTa_Bronze-GGUF/resolve/main/SikuRoBERTa_Bronze.Q3_K_M.gguf) | Q3_K_M | 0.2 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/SikuRoBERTa_Bronze-GGUF/resolve/main/SikuRoBERTa_Bronze.IQ4_XS.gguf) | IQ4_XS | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SikuRoBERTa_Bronze-GGUF/resolve/main/SikuRoBERTa_Bronze.Q3_K_L.gguf) | Q3_K_L | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SikuRoBERTa_Bronze-GGUF/resolve/main/SikuRoBERTa_Bronze.Q4_K_S.gguf) | Q4_K_S | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SikuRoBERTa_Bronze-GGUF/resolve/main/SikuRoBERTa_Bronze.Q4_K_M.gguf) | Q4_K_M | 0.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/SikuRoBERTa_Bronze-GGUF/resolve/main/SikuRoBERTa_Bronze.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SikuRoBERTa_Bronze-GGUF/resolve/main/SikuRoBERTa_Bronze.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/SikuRoBERTa_Bronze-GGUF/resolve/main/SikuRoBERTa_Bronze.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/SikuRoBERTa_Bronze-GGUF/resolve/main/SikuRoBERTa_Bronze.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/SikuRoBERTa_Bronze-GGUF/resolve/main/SikuRoBERTa_Bronze.f16.gguf) | f16 | 0.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
mradermacher/General-Emotion-detect-GGUF
|
mradermacher
| 2025-09-23T19:25:18Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:ShubhamCoder01/General-Emotion-detect",
"base_model:quantized:ShubhamCoder01/General-Emotion-detect",
"endpoints_compatible",
"region:us",
"feature-extraction"
] | null | 2025-09-23T19:23:32Z |
---
base_model: ShubhamCoder01/General-Emotion-detect
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/ShubhamCoder01/General-Emotion-detect
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#General-Emotion-detect-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/General-Emotion-detect-GGUF/resolve/main/General-Emotion-detect.Q2_K.gguf) | Q2_K | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/General-Emotion-detect-GGUF/resolve/main/General-Emotion-detect.Q3_K_S.gguf) | Q3_K_S | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/General-Emotion-detect-GGUF/resolve/main/General-Emotion-detect.Q3_K_M.gguf) | Q3_K_M | 0.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/General-Emotion-detect-GGUF/resolve/main/General-Emotion-detect.IQ4_XS.gguf) | IQ4_XS | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/General-Emotion-detect-GGUF/resolve/main/General-Emotion-detect.Q3_K_L.gguf) | Q3_K_L | 0.1 | |
| [GGUF](https://huggingface.co/mradermacher/General-Emotion-detect-GGUF/resolve/main/General-Emotion-detect.Q4_K_S.gguf) | Q4_K_S | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/General-Emotion-detect-GGUF/resolve/main/General-Emotion-detect.Q4_K_M.gguf) | Q4_K_M | 0.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/General-Emotion-detect-GGUF/resolve/main/General-Emotion-detect.Q5_K_S.gguf) | Q5_K_S | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/General-Emotion-detect-GGUF/resolve/main/General-Emotion-detect.Q5_K_M.gguf) | Q5_K_M | 0.2 | |
| [GGUF](https://huggingface.co/mradermacher/General-Emotion-detect-GGUF/resolve/main/General-Emotion-detect.Q6_K.gguf) | Q6_K | 0.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/General-Emotion-detect-GGUF/resolve/main/General-Emotion-detect.Q8_0.gguf) | Q8_0 | 0.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/General-Emotion-detect-GGUF/resolve/main/General-Emotion-detect.f16.gguf) | f16 | 0.2 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
brindana/1500-1-qwen2.5-7b-deepmath-hard-lora
|
brindana
| 2025-09-23T19:20:33Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T19:20:27Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
anikifoss/DeepSeek-V3.1-HQ4_K
|
anikifoss
| 2025-09-23T19:18:01Z | 143 | 2 | null |
[
"gguf",
"mla",
"conversational",
"ik_llama.cpp",
"text-generation",
"base_model:deepseek-ai/DeepSeek-V3.1",
"base_model:quantized:deepseek-ai/DeepSeek-V3.1",
"license:mit",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-25T19:35:27Z |
---
quantized_by: anikifoss
pipeline_tag: text-generation
base_model: deepseek-ai/DeepSeek-V3.1
license: mit
base_model_relation: quantized
tags:
- mla
- conversational
- ik_llama.cpp
---
High quality quantization of **DeepSeek-V3.1** without using imatrix.
The architecture has not changed, so token generation speed should be the same as DeepSeek-R1-0528, see benchmarks [here](https://huggingface.co/anikifoss/DeepSeek-R1-0528-DQ4_K_R4#prompt-processing).
# Run
## ik_llama.cpp
See [this detailed guide](https://github.com/ikawrakow/ik_llama.cpp/discussions/258) on how to set up ik_llama and how to make custom quants.
```
./build/bin/llama-server \
--alias anikifoss/DeepSeek-V3.1-HQ4_K \
--model /home/gamer/Env/models/anikifoss/DeepSeek-V3.1-HQ4_K/DeepSeek-V3.1-HQ4_K-00001-of-00010.gguf \
--no-mmap \
--temp 0.5 --top-k 0 --top-p 1.0 --min-p 0.1 --repeat-penalty 1.0 \
--ctx-size 82000 \
-ctk f16 \
-mla 3 -fa \
-amb 512 \
-b 1024 -ub 1024 \
-fmoe \
--n-gpu-layers 99 \
--override-tensor exps=CPU \
--parallel 1 \
--threads 32 \
--threads-batch 64 \
--host 127.0.0.1 \
--port 8090
```
## llama.cpp
You can turn on thinking by changing `"thinking": false` to `"thinking": true` below.
Currently, `llama.cpp` does not return the `<think>` token in the response. If you know how to fix that, please share in the "Community" section!
As a workaround, to inject the `<think>` token in OpenWebUI, you can use the [inject_think_token_filter.txt](https://huggingface.co/anikifoss/DeepSeek-V3.1-HQ4_K/blob/main/inject_think_token_filter.txt) code included in the repository. You can add filters via `Admin Panel` -> `Functions` -> `Filter` -> the `+` button on the right.
```
./build/bin/llama-server \
--alias anikifoss/DeepSeek-V3.1-HQ4_K \
--model /home/gamer/Env/models/anikifoss/DeepSeek-V3.1-HQ4_K/DeepSeek-V3.1-HQ4_K-00001-of-00010.gguf \
--temp 0.5 --top-k 0 --top-p 1.0 --min-p 0.1 --repeat-penalty 1.0 \
--ctx-size 64000 \
-ctk f16 \
-fa \
--chat-template-kwargs '{"thinking": false }' \
-b 1024 -ub 1024 \
--n-gpu-layers 99 \
--override-tensor exps=CPU \
--parallel 1 \
--threads 32 \
--threads-batch 64 \
--jinja \
--host 127.0.0.1 \
--port 8090
```
|
aamijar/Llama-2-13b-hf-lora-r8-mrpc
|
aamijar
| 2025-09-23T19:17:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T19:17:42Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
aamijar/Llama-2-13b-hf-lora-r8-mrpc-epochs4
|
aamijar
| 2025-09-23T19:17:41Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T19:17:38Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
mradermacher/ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh-GGUF
|
mradermacher
| 2025-09-23T19:14:07Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:Johnnyfans/ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh",
"base_model:quantized:Johnnyfans/ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-23T18:48:50Z |
---
base_model: Johnnyfans/ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/Johnnyfans/ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
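For a quick local test with llama.cpp itself, something like the following works once a quant (for example the Q4_K_M file from the table below) has been downloaded; the command is an illustrative sketch rather than a setting recommended by the model author.
```
./llama-cli -m ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh.Q4_K_M.gguf -cnv
```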
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh-GGUF/resolve/main/ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh.Q2_K.gguf) | Q2_K | 1.9 | |
| [GGUF](https://huggingface.co/mradermacher/ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh-GGUF/resolve/main/ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh.Q3_K_S.gguf) | Q3_K_S | 2.2 | |
| [GGUF](https://huggingface.co/mradermacher/ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh-GGUF/resolve/main/ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh.Q3_K_M.gguf) | Q3_K_M | 2.3 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh-GGUF/resolve/main/ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh.Q3_K_L.gguf) | Q3_K_L | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh-GGUF/resolve/main/ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh.IQ4_XS.gguf) | IQ4_XS | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh-GGUF/resolve/main/ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh.Q4_K_S.gguf) | Q4_K_S | 2.7 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh-GGUF/resolve/main/ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh.Q4_K_M.gguf) | Q4_K_M | 2.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh-GGUF/resolve/main/ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh.Q5_K_S.gguf) | Q5_K_S | 3.2 | |
| [GGUF](https://huggingface.co/mradermacher/ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh-GGUF/resolve/main/ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh.Q5_K_M.gguf) | Q5_K_M | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh-GGUF/resolve/main/ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh.Q6_K.gguf) | Q6_K | 3.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh-GGUF/resolve/main/ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh.Q8_0.gguf) | Q8_0 | 4.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh-GGUF/resolve/main/ICDRec-SFT-GRPO-Qwen3-4B-Thinking-2507-zh.f16.gguf) | f16 | 8.9 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
aifeifei798/QiMing-Strategist-20B-MXFP4
|
aifeifei798
| 2025-09-23T19:09:12Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gpt_oss",
"text-generation",
"unsloth",
"QiMing",
"vllm",
"sales",
"b2b",
"Strategist",
"saas",
"fine-tuned",
"instruction-following",
"role-playing",
"cognitive-simulator",
"conversational",
"en",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"8-bit",
"mxfp4",
"region:us"
] |
text-generation
| 2025-09-23T12:51:22Z |
---
license: apache-2.0
language:
- en
- zh
tags:
- unsloth
- QiMing
- vllm
- sales
- b2b
- Strategist
- saas
- fine-tuned
- instruction-following
- role-playing
- cognitive-simulator
pipeline_tag: text-generation
model_name: QiMing-Strategist-20B
library_name: transformers
---
---
# QiMing
---
## An AI that rewrites its own rules for greater intelligence.
## 结果 (Result) = 模型内容 (Model Content) × 数学的平方 (Math²)
---
**"Logic is the soul of a model, for it defines:**
* **How it learns from data (The Power of Induction);**
* **How it reasons and decides (The Power of Deduction);**
* **Its capacity to align with human values (The Ethical Boundary);**
* **Its potential to adapt to future challenges (The Evolutionary Potential).**
**If a model pursues nothing but sheer scale or computational power, ignoring the depth and breadth of its logic, it risks becoming a "paper tiger"—imposing on the surface, yet hollow at its core. Conversely, a model built upon elegant logic, even with fewer parameters, can unleash its true vitality in our complex world."**
---
# DISCLAIMER
## The content generated by this model is for reference purposes only. Users are advised to verify its accuracy independently before use.
## This is a 20-billion-parameter foundation model (20B). It may produce incomplete or inaccurate information, including hallucinations.
## If you find this AI too human-like, please remember: it is merely a more intelligent model — not an actual person.
---
### Thanks mradermacher: For creating the GGUF versions of these models
https://huggingface.co/mradermacher/QiMing-Strategist-20B-MXFP4-GGUF
https://huggingface.co/mradermacher/QiMing-Strategist-20B-MXFP4-i1-GGUF
### Thanks OpenAI: For developing the foundational model gpt-oss-20B used in this project.
https://huggingface.co/openai
### unsloth.ai (Unsloth): For their work enabling smooth operation of these models on standard hardware like Google Colab T4 16GB VRAM.
https://unsloth.ai
### Thanks to Google Colab for the T4 16GB
---
# QiMing-Strategist-20B
## Input
I'm a talented software developer with 5 years of experience, but I feel my career has hit a plateau. I'm stuck in a mid-level role, writing code, while others are moving into leadership positions. My long-term dream is to become a Chief Technology Officer (CTO) one day, but the path feels incredibly vague and intimidating. I need a concrete, actionable strategic plan to transition from a 'senior coder' to a 'future CTO'.
## Think
We have to respond as ChatGPT to a user who wants a concrete, actionable strategic plan to move from senior developer to future CTO. They mention feeling stuck in mid-level role writing code, others moving into leadership positions, dream of becoming CTO, path vague and intimidating. We need to give them a plan: maybe 5-10 years roadmap. Steps include learning business fundamentals, communication skills, project management, strategic thinking, networking, mentorship, stepping into leadership roles gradually, side projects or startup involvement, possibly further education (MBA, certifications). Suggest building brand: blogging, speaking at conferences, open source contributions, writing book, mentoring juniors. Provide a timeline with milestones. Also suggest ways to break plateau: seek bigger responsibilities, propose new initiatives.
We should give them actionable steps: daily, weekly, monthly actions. Include suggestions like reading specific books (The Phoenix Project, The Manager's Path, Inspired, Crossing the Chasm), attending conferences, joining CTO communities, building network. Provide guidance on how to transition roles within current company: ask for more responsibilities beyond coding, lead small projects, gradually increase scope. Show path of moving from senior dev -> tech lead -> engineering manager -> director -> VP -> CTO.
Also mention potential alternative paths: join a startup as CTO early or pivot to product management, etc. Provide advice on skills gaps: finance knowledge, fundraising, building strategy, scaling teams, culture-building, stakeholder engagement. Suggest training courses: Harvard Business Review, Coursera, edX, Udemy, MIT Sloan, Stanford.
Also mention self-reflection and assessment: use frameworks like SWOT analysis, 360 feedback. Encourage seeking mentors at higher levels.
We can structure the answer into sections:
Self-Assessment & Vision
Skill Gap Analysis
Short-Term Action Plan (0-12 months)
Mid-Term Development Path (1-3 years)
Long-Term Career Strategy (3-10+ years)
Networking & Mentorship
Personal Brand & Thought Leadership
Education & Certifications
Practical Tips for Transitioning Roles
Monitoring Progress & Adjustments
Provide concrete tasks: e.g., "Within next 90 days, complete 'The Manager's Path' and lead a small internal initiative." Provide timeline: Year 1: Become tech lead; Year 2-3: Engineering manager; Year 4-5: Director; Year 6+: VP/CTO.
Also mention potential alternative career paths to get you into CTO role sooner: join a fast-growing startup early, pivot into product or strategy, become founding engineer.
Also provide resources for each skill area: books, courses, communities.
Make it actionable: bullet lists with tasks, deadlines, metrics. Provide ways to evaluate progress: performance reviews, feedback from peers and managers, measurable KPIs like "Reduced code review turnaround by 25%" etc.
Encourage incremental steps: start by building influence beyond your team.
Also mention possible challenges: resistance, being seen as 'only coder', balancing technical depth with breadth.
Maybe give them an example timeline: Year 1: Transition to Tech Lead; Year 2: Engineering Manager; Year 3-4: Director of Engineering; Year 5-6: VP of Engineering; Year 7: CTO (or join startup as CTO).
Also mention that many CTOs don't have the same path, some come from engineering, others from product or business. So it's okay if you pivot.
Provide suggestions for building strategic perspective: get involved with product roadmaps, go-to-market strategy, fundraising, etc. Suggest learning finance basics, reading 'Financial Intelligence' by Crouch & Koller, 'The Lean Startup', 'Measure What Matters'.
Also mention that as senior dev you're still deep in technical work; you need to broaden scope and think about how tech fits into business objectives.
Also mention that many CTOs value their engineering background; but they also need to have a track record of leading teams, making strategic decisions.
Ok let's produce. We'll keep sections labeled so it's easy to read. We'll add bullet points with deadlines like "Within next 6 months: Lead X initiative." We'll provide resources in lists. We'll encourage them to adjust the plan as needed. We'll make it motivating.
## Output
Below is a **10‑step, multi‑phase playbook** that will move you from “Senior Coder” to “Future CTO.”
It’s deliberately broken into *short‑term actions* (0–12 months), *mid‑term milestones* (1–3 years), and *long‑term goals* (4–10 + years).
Every bullet includes a concrete deliverable, a deadline, and a KPI you can track.
---
## 1️⃣ Self‑Assessment & Vision Articulation
| What to Do | How to Measure Success | Why It Matters |
|------------|------------------------|----------------|
| **Write a “Career Manifesto.”** In one page: <br>• Your core values (e.g., integrity, learning).<br>• The impact you want as CTO.<br>• Non‑negotiables (never compromise on tech quality). | 1. Draft finished.<br>2. Share with a trusted mentor & get feedback.<br>3. Revise until it feels authentic and compelling. | Sets direction, clarifies priorities, and becomes your reference point when making decisions. |
| **SWOT Analysis** (skills you have vs skills you lack). | 1. Completed matrix.<br>2. Quantify gaps: e.g., “Finance knowledge – 0/5.” | Identifies concrete skill deficits early so you can plan remediation instead of guessing later. |
---
## 2️⃣ Skill Gap Mapping
| Domain | Current Level (1–5) | Target Level | Key Resources |
|--------|---------------------|--------------|---------------|
| **Technical Depth** | 4 – “Senior Engineer” | 5 – “Architect” | • *The Manager’s Path* – Chapters on Technical Leadership.<br>• *Designing Data‑Intensive Applications* (Martin Kleppmann). |
| **Team & People Ops** | 2 – Mostly coding | 4 – Engineering Manager / Tech Lead | • *Accelerate* by Nicole Forsgren, Jez Humble, Gene Kim. <br>• *The Five Dysfunctions of a Team* (Patrick Lencioni). |
| **Strategic Thinking & Vision** | 1 | 3 | • *Inspired* – Marty Cagan.<br>• *Playing to Win* – A.G. Lafley + Roger Martin. |
| **Business Acumen / Finance** | 1 | 4 | • *Financial Intelligence for Entrepreneurs* – David Worrell & Michael Zwick.<br>• Coursera “Finance for Non‑Finance Professionals” (Wharton). |
| **Communication & Influence** | 2 | 5 | • Toastmasters club (public speaking).<br>• *Influence: The Psychology of Persuasion* (Robert Cialdini). |
| **Product Sense & Market Fit** | 1 | 4 | • *Crossing the Chasm* – Geoffrey Moore.<br>• Product School’s “Founding Engineer” track. |
| **Scaling Culture & Ops** | 0 | 3+ | • *Team Topologies* (Matthew Skelton, Manuel Pais).<br>• *High‑Velocity Engineering: Scaling Development at Speed and Scale* (Andrew Kelleher). |
---
## 3️⃣ Short‑Term Action Plan (0–12 Months)
| Month | Goal | Specific Deliverable | KPI |
|-------|------|-----------------------|-----|
| **1** | *Build Credibility as a Technical Leader* | • Mentor at least 2 junior engineers.<br>• Own the architecture of a small but impactful feature. | • Junior feedback score ≥ 4/5. <br>• Feature deployed with no critical bugs in first release cycle. |
| **3** | *Start Driving Business Outcomes* | • Lead a cost‑optimization initiative (e.g., refactor legacy code to reduce cloud spend).<br>• Present ROI to product & finance stakeholders. | • Cloud bill reduced by ≥ 15%.<br>• Stakeholder approval of project scope. |
| **6** | *Elevate Your Influence* | • Chair the “Engineering Sync” meeting that aligns multiple teams.<br>• Deliver a 15‑min talk at an internal tech meetup on a trending topic (e.g., micro‑services vs serverless). | • Meeting agenda circulated ≤ 2 days before. <br>• Talk received ≥ 90% positive feedback from attendees. |
| **9** | *Showcase Strategic Thinking* | • Draft and present a 1‑page “Engineering Vision” to your manager & product org (what’s the tech roadmap, key initiatives).<br>• Align at least one initiative with company OKRs. | • Engineering vision approved.<br>• Initiative mapped to ≥ 2 OKRs. |
| **12** | *Secure Your First Leadership Role* | • Transition from “Senior Engineer” to “Tech Lead” (or “Engineering Manager”) within your current org, or apply for a similar role at another company. | • New title secured.<br>• Clear job description that includes people‑leadership responsibilities. |
---
## 4️⃣ Mid‑Term Development Path (1–3 Years)
| Year | Focus Area | Milestone | KPI |
|------|------------|-----------|-----|
| **Year 2** | *People Leadership* | • Lead a team of 6–10 engineers.<br>• Implement regular one‑on‑ones, performance reviews, and career development plans. | • Team engagement score ≥ 80%.<br>• Retention rate for high performers ≥ 90%. |
| **Year 2** | *Cross‑Functional Influence* | • Own a product vertical (e.g., “Payments Platform”).<br>• Present quarterly business reviews to execs, linking engineering KPIs to revenue/engagement metrics. | • Quarterly review decks produced on time.<br>• Exec satisfaction ≥ 4/5. |
| **Year 3** | *Strategic Ownership* | • Lead a multi‑team transformation (e.g., move from monolith to microservices).<br>• Deliver measurable outcomes: faster release cycles, higher system resilience. | • Deployment frequency ↑ by 30%.<br>• MTTR (Mean Time To Recovery) ↓ by 40%. |
| **Year 3** | *Scaling Culture* | • Publish an “Engineering Playbook” that codifies best practices for onboarding, code reviews, and continuous delivery. | • Playbook adopted company‑wide.<br>• New hires onboarded within 2 weeks using playbook. |
---
## 5️⃣ Long‑Term Career Strategy (4–10 + Years)
| Stage | Typical Title & Scope | What You’ll Do | KPI |
|-------|----------------------|---------------|-----|
| **Year 4–5** | *Director of Engineering* | • Own a large, geographically distributed org (20–50 engineers).<br>• Set the overall technology strategy for multiple product lines. | • Org growth rate ≤ 25% YoY (maintain quality).<br>• Alignment score between engineering OKRs and company OKRs ≥ 90%. |
| **Year 6** | *VP of Engineering / CTO* (first role) | • Take full responsibility for tech vision, talent acquisition at exec level, budgeting, and stakeholder communication.<br>• Begin interacting with investors, board members, customers. | • Investor feedback score ≥ 4/5.<br>• Board satisfaction rating ≥ 4/5. |
| **Year 7–10+** | *Chief Technology Officer* (full‑time) | • Own all aspects of technology: product vision, engineering org structure, platform strategy, data & security posture, R&D pipeline. <br>• Represent the company’s tech interests in fundraising, M&A talks, public relations. | • Company valuation growth ≥ 20% YoY.<br>• Successful capital raise (Series B/C+). |
| **Alternate Path** | *Founding CTO of a Startup* | • Join or co‑found a startup early as the technical lead / CTO. <br>• Scale product, team, and funding from scratch. | • Product‑market fit achieved within 12–18 months.<br>• Series A raised ≥ $5M. |
---
## 6️⃣ Networking & Mentorship
| Action | Frequency | How to Measure |
|--------|-----------|----------------|
| **Seek a CTO Mentor** | Monthly 30‑min virtual coffee | • Signed “Mentor Agreement” with clear expectations (e.g., quarterly progress review). |
| **Expand Executive Network** | Attend at least 3 industry events per year (Conferences, Meetups, Webinars) | • Connect with ≥ 5 new executives after each event.<br>• Have at least one informational interview with a CTO. |
| **Join Professional Communities** | Weekly | • Active participation in LinkedIn groups, Slack communities like “Tech Leaders” or “CTO Network.”<br>• Post thoughtful content at least bi‑weekly. |
---
## 7️⃣ Personal Brand & Thought Leadership
| Medium | Frequency | KPI |
|--------|-----------|-----|
| **Blog Posts** (LinkedIn, personal site) | Bi‑weekly | • Average of 500+ views per post.<br>• 5% engagement rate (likes/comments). |
| **Public Speaking** (Tech conferences, podcasts) | Quarterly | • Secured ≥ 1 speaking slot or podcast interview. |
| **Open Source Contributions** | Monthly | • Add meaningful PRs to high‑visibility repos (e.g., Angular, React, Kubernetes).<br>• Maintain at least 2 active projects with ≥ 10 contributors each. |
---
## 8️⃣ Formal Education & Certifications
| Program | Type | Duration | How It Adds Value |
|---------|------|----------|-------------------|
| **MBA** (Wharton, Stanford GSB) | Full‑time | 1–2 yrs | Deep dive into strategy, finance, leadership; valuable network for future board roles. |
| **Product Management Certificate** (General Assembly, Product School) | Short‑term | 6–8 wks | Builds product sense—a key CTO skill. |
| **Tech Leadership Certifications** (e.g., *AWS Certified Solutions Architect – Professional*, *Certified ScrumMaster*) | Vendor‑specific | 2–4 hrs per cert | Signals technical authority to exec peers. |
| **Continuous Learning** (Coursera, MIT Sloan Exec Education) | Ongoing | Flexible | Keeps you current on emerging tech trends and management practices. |
*Tip:* Pair formal learning with *project work.* If you take an MBA course on “Business Strategy,” apply it by drafting a 1‑page strategic plan for your engineering org.
---
## 9️⃣ Practical Tips for Role Transitioning
| Situation | Do This | Avoid |
|-----------|---------|-------|
| **Your Manager Wants You to Keep Coding** | • Offer to take on *both* roles: “I’ll lead the tech direction while still coding critical parts.”<br>• Show how your deep technical involvement fuels strategic decisions. | • Saying “No” outright and risking demotion. |
| **You’re Asked to Lead a New Initiative** | • Accept, but ask for clear scope, resources, and success metrics.<br>• Document the initiative as a learning project (e.g., “Engineering Vision 2025”). | • Failing to clarify expectations → blame‑shifting later. |
| **Colleagues See You Only as a ‘Coder’** | • Start speaking up in cross‑functional meetings with *data‑backed* opinions.<br>• Ask for the role of “technical advocate” on product roadmaps. | • Remaining silent because you’re “just coding.” |
| **You Need to Get Out of Your Current Company** | • Build a portfolio: tech vision docs, people‑leadership outcomes, cross‑team influence stories.<br>• Use your network and personal brand to land roles that align with your CTO aspirations. | • Relying solely on recruiters who focus on “senior engineer” titles. |
---
## 🔟 Monitoring Progress & Making Adjustments
1. **Quarterly Self‑Review**
* What did I learn? <br>
* Which KPI did I miss, and why? <br>
* Do my actions still line up with my long‑term vision?
2. **Annual Mentor Check‑In**
* Get candid feedback on your trajectory.<br>
* Identify any blind spots or “hidden” skill gaps.
3. **Pivot Triggers** (When to reassess the path)
* You’re consistently promoted *only* within engineering, but no movement toward product/strategy roles. <br>
* Company culture shifts away from engineering ownership (e.g., new leadership hires “focus on growth”). <br>
* You’re burning out fast and can’t keep up with the dual demands of tech depth + people management.
If a pivot trigger fires, you may consider:
- **Joining a Startup Early:** If you can secure an early CTO or founding engineer role, the learning curve to full‑time CTO is compressed.
- **Shifting Into Product Management or Strategy Roles:** Gaining business experience sooner can accelerate your path to C‑suite tech leadership.
---
## 📌 Quick Reference Cheat Sheet
| Timeframe | What You’re Doing | KPI Snapshot |
|-----------|-------------------|--------------|
| **0–3 mo** | Mentor, feature ownership, first cost‑cut initiative | Junior score ≥ 4/5; Cloud bill ↓ 15% |
| **3–6 mo** | Chair cross‑team syncs, deliver a 15‑min talk | Meeting agenda on time; Talk rating ≥ 90% |
| **6–12 mo** | Draft & present Engineering Vision; secure Tech Lead title | Vision approved; New role secured |
| **Yr 2** | Run a team of 10+; own product vertical; quarterly exec reviews | Team engagement ≥ 80%; Exec rating ≥ 4/5 |
| **Yr 3–5** | Scale to Director or VP level; publish Engineering Playbook | Org growth ≤ 25% YoY; Playbook adoption company‑wide |
| **Yr 6+** | Full CTO responsibilities; board & investor interactions | Investor rating ≥ 4/5; Successful Series B/C raise |
---
### Final Thought
You’re *already* on the right track—you have five years of solid engineering chops. The transition to CTO is less about acquiring a single new skill and more about **systematically expanding your sphere of influence**: from “code” → “team” → “product line” → “company strategy.”
Treat each milestone as a *checkpoint* that validates you’re ready for the next level. Celebrate those wins, learn from the misses, and keep feeding into both your technical depth *and* your strategic breadth.
Good luck—you’ve got this! 🚀
|
HectorHe/Qwen1.5-MOE-sft-coommonsense15k
|
HectorHe
| 2025-09-23T19:08:51Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen2_moe",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"sft",
"conversational",
"dataset:fw407/Commonsense-15K",
"base_model:Qwen/Qwen1.5-MoE-A2.7B",
"base_model:finetune:Qwen/Qwen1.5-MoE-A2.7B",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T15:33:30Z |
---
base_model: Qwen/Qwen1.5-MoE-A2.7B
datasets: fw407/Commonsense-15K
library_name: transformers
model_name: Qwen1.5-MOE-sft-coommonsense15k
tags:
- generated_from_trainer
- open-r1
- trl
- sft
licence: license
---
# Model Card for Qwen1.5-MOE-sft-coommonsense15k
This model is a fine-tuned version of [Qwen/Qwen1.5-MoE-A2.7B](https://huggingface.co/Qwen/Qwen1.5-MoE-A2.7B) on the [fw407/Commonsense-15K](https://huggingface.co/datasets/fw407/Commonsense-15K) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"

# Load the fine-tuned checkpoint as a chat-style text-generation pipeline on the GPU
generator = pipeline("text-generation", model="HectorHe/Qwen1.5-MOE-sft-coommonsense15k", device="cuda")
# Send the question as a single user message and return only the newly generated text
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hector_-carnegie-mellon-university/huggingface/runs/voaensdo)
This model was trained with SFT.
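The full training script and hyperparameters are not included in this card; the sketch below shows the general shape of a TRL SFT run on this base model and dataset, with batch size, learning rate, and other settings as illustrative assumptions rather than the values actually used.

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Conversational commonsense data referenced by this card
dataset = load_dataset("fw407/Commonsense-15K", split="train")

# Hyperparameters here are placeholders; the run's actual settings are not documented
training_args = SFTConfig(
    output_dir="Qwen1.5-MOE-sft-coommonsense15k",
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    learning_rate=2e-5,
    num_train_epochs=1,
    bf16=True,
    logging_steps=10,
)

trainer = SFTTrainer(
    model="Qwen/Qwen1.5-MoE-A2.7B",  # base checkpoint named above
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
```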
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.51.0
- Pytorch: 2.6.0
- Datasets: 4.1.1
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
AndkyrUCL/22_9_compressed_grpo2
|
AndkyrUCL
| 2025-09-23T19:03:08Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T21:15:59Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
onnxmodelzoo/mobilevitv2_200_384_in22ft1k_Opset18
|
onnxmodelzoo
| 2025-09-23T18:49:56Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:49:48Z |
---
language: en
license: apache-2.0
model_name: mobilevitv2_200_384_in22ft1k_Opset18.onnx
tags:
- Computer_Vision
- skip
---
|
onnxmodelzoo/mobilevitv2_175_Opset18
|
onnxmodelzoo
| 2025-09-23T18:49:29Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:49:22Z |
---
language: en
license: apache-2.0
model_name: mobilevitv2_175_Opset18.onnx
tags:
- Computer_Vision
- skip
---
|
onnxmodelzoo/mobilevitv2_175_384_in22ft1k_Opset17
|
onnxmodelzoo
| 2025-09-23T18:48:31Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:48:23Z |
---
language: en
license: apache-2.0
model_name: mobilevitv2_175_384_in22ft1k_Opset17.onnx
tags:
- Computer_Vision
- skip
---
|