Datasets:

| modelId (string) | author (string) | last_modified (timestamp[us, tz=UTC]) | downloads (int64) | likes (int64) | library_name (string) | tags (list) | pipeline_tag (string) | createdAt (timestamp[us, tz=UTC]) | card (string) |
|---|---|---|---|---|---|---|---|---|---|
moyixiao/Qwen3-0.6B-gspo3-f16-50 | moyixiao | 2025-09-25T06:02:35 | 0 | 0 | transformers | ["transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-24T18:52:58 |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ratrikhan137/blockassist | ratrikhan137 | 2025-09-25T05:56:55 | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "leggy pouncing finch", "arxiv:2504.07091", "region:us"] | null | 2025-09-20T11:39:45 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- leggy pouncing finch
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mohammadmahdinouri/mol-qqp | mohammadmahdinouri | 2025-09-25T04:29:29 | 0 | 0 | transformers | ["transformers", "safetensors", "ModernALBERT", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2025-09-25T04:29:20 |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Aasdfip/greedy_Q_op_1260 | Aasdfip | 2025-09-25T04:14:41 | 0 | 0 | transformers | ["transformers", "safetensors", "gemma3", "image-text-to-text", "conversational", "arxiv:1910.09700", "text-generation-inference", "endpoints_compatible", "region:us"] | image-text-to-text | 2025-09-25T04:11:19 |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
thefirstgoku/2510SEP_inter_v32_4 | thefirstgoku | 2025-09-25T03:45:40 | 0 | 0 | null | ["safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us"] | any-to-any | 2025-09-25T03:44:59 |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
sasawq21/test-20250923-025008 | sasawq21 | 2025-09-25T03:29:41 | 0 | 0 | transformers | ["transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/medgemma-4b-it", "base_model:finetune:google/medgemma-4b-it", "endpoints_compatible", "region:us"] | null | 2025-09-23T02:50:12 |
---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: test-20250923-025008
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for test-20250923-025008
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sasawq21/test-20250923-025008", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/rustin_r-the-university-of-texas-at-austin/chimera_medgemma-intern-0922/runs/lwcbz48r)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
noobmaster6009/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-vicious_yawning_dolphin | noobmaster6009 | 2025-09-25T02:50:55 | 141 | 1 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am vicious_yawning_dolphin", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-14T15:07:37 |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am vicious_yawning_dolphin
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Chattiori/ChattioriMixesXL | Chattiori | 2025-09-25T02:34:11 | 0 | 4 | null | ["sdxl", "pony", "license:creativeml-openrail-m", "region:us"] | null | 2024-03-25T03:33:05 |
---
license: creativeml-openrail-m
tags:
- sdxl
- pony
---
The place where our SDXL and Pony models (by Chattiori and Crody), along with some models deleted from CivitAI, are saved for various purposes.
Chattiori: https://civitai.com/user/Chattiori
Crody: https://civitai.com/user/Crody
|
valiantcat/DeepSeek-R1-Distill-Qwen-Fortune-14B | valiantcat | 2025-09-25T01:54:26 | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "unsloth", "trl", "sft", "conversational", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-25T01:45:18 |
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---
# Model Card for Model ID
An R1 model fine-tuned on selected Wuxing (Five Elements) and Bagua (Eight Trigrams) data for Chinese fortune-telling and life-destiny analysis.
## Training at Chongqing Valiant Cat
This model was trained by the AI Laboratory of Chongqing Valiant Cat Technology Co., Ltd. (`https://vvicat.com/`). Business cooperation is welcome.
### Direct Use
```python
from transformers import TextStreamer
from unsloth import FastLanguageModel

model_name = "valiantcat/DeepSeek-R1-Distill-Qwen-Fortune-14B"
model, tokenizer = FastLanguageModel.from_pretrained(
model_name=model_name,
max_seq_length=8192,
load_in_4bit=True
)
FastLanguageModel.for_inference(model)
question = "紫微斗数中的天相星入宫,对我的婚姻有何预示?"  # "In Zi Wei Dou Shu, what does the Tianxiang star entering a palace portend for my marriage?"
inputs = tokenizer([question], return_tensors="pt").to("cuda")
streamer = TextStreamer(tokenizer)
_ = model.generate(
**inputs,
streamer=streamer,
max_new_tokens=1200,
use_cache=True
)
```
```
<think>
Okay, the user is now asking what the Tianxiang star entering a palace in Zi Wei Dou Shu means for marriage. First, I need to recall the basic meaning of the Tianxiang star: it belongs to yang earth and transforms into the "seal" (yinshou), governing assistance, coordination, and the enjoyment of ready-made blessings. Next, the user may want to know how its influence on marriage differs by palace (for example, the Spouse palace or the Children palace), so I should analyze those cases separately. For instance, Tianxiang in the Spouse palace may indicate a fairly harmonious marriage that still needs more communication, while in the Fortune palace it may reflect enjoyment and dependence within the relationship. Then consider the user's background: a 30-year-old in the internet industry who previously asked whether keeping pets improves wealth luck. Turning now to a Zi Wei Dou Shu marriage question suggests they want to use destiny analysis to plan their personal life or make decisions, which may imply worries or expectations about marriage and a need for concrete guidance. I should also avoid overly absolute predictions and stress the person's own agency and environmental influences: although Tianxiang is a coordinating star, unfavorable stars or the Four Transformations can still bring challenges, so I should remind the user that the whole chart must be read together and that no single star determines fate. The user probably also wants practical advice rather than just a divination result, so after the explanation I should add concrete ways to improve the marital relationship, such as communication and tolerance, which adds practical value. Finally, keep the language plain, avoid excessive jargon, and keep the structure clear, explaining the influence of different palaces and star combinations point by point for easy reference.
</think>
In Zi Wei Dou Shu, the Tianxiang star is one of the six major stars. It belongs to "yang earth" and transforms into the "seal," symbolizing coordination, assistance, and the enjoyment of ready-made blessings. When Tianxiang enters a palace, its influence on marriage must be judged holistically from the specific palace (Spouse palace, Fortune palace, etc.), the Four Transformations (Hua Lu, Hua Quan, Hua Ke, Hua Ji), the stars it meets, and the auspicious or inauspicious stars in the three directions and four positions. A brief analysis from a destiny-reading perspective:

1. Basic traits of the Tianxiang star
- Coordination: Tianxiang carries a "harmonizing" energy and is good at resolving conflicts, but lacks initiative of its own.
- Enjoyment-seeking: likes to rely on others' efforts and pursues a stable, comfortable life.
- Emotional sensitivity: attentive to a partner's care and support, but may lose itself through excessive accommodation.

2. Influence on marriage by palace
(1) Tianxiang in the Spouse palace
Positive traits:
- A harmonious marriage in which both sides reach consensus easily and avoid fierce conflict.
- The partner tends to be tolerant and responsible, willing to give to the family.
- If auspicious stars (Lu, Quan, Ke) accompany it, the marriage is stable, and a good match comes easily with late marriage or after middle age.
Potential challenges:
- Over-reliance on the partner; with Hua Ji or malefic stars (Huoxing, Lingxing), suppressed emotions may lead to cold wars.
- If the Spouse palace is opposed by pattern-breaking stars (Qingyang, Tuoluo, Xing, illness stars), watch for poor communication or health problems.
(2) Tianxiang in the Fortune palace
Positive traits:
- Enjoying the partner's care and attention; an emotionally warm married life.
- Psychological satisfaction is easily gained through marriage.
Potential challenges:
- With Hua Ji or malefic stars, over-dependence may put too much pressure on one side.
- Taboo combinations (e.g., Tianxiang meeting Jiesha) call for balancing giving and receiving in the relationship.
(3) Tianxiang in the Travel or Career palace
- The influence on marriage is smaller, but if this is a core palace of the chart, judge it together with the other palaces.

3. Specific effects of the Four Transformations on marriage
- Hua Lu: strengthens the desire for enjoyment; likely to meet a prosperous and fulfilling partnership.
- Hua Quan: wins family standing through one's own ability rather than passive dependence.
- Hua Ke: good personal charm in relationships; easily admired by the opposite sex.
- Hua Ji: emotional sensitivity; beware of losing oneself through excessive accommodation.

4. Suggestions for improving marriage luck
- Communicate proactively: although Tianxiang coordinates, avoid constant yielding; express your needs regularly.
- Cultivate shared interests: strengthen interaction through travel, study, and similar activities.
- Dissolve inauspicious influences: if the chart contains broken patterns or malefic stars, wearing rose quartz or smoky quartz is said to ease conflicts.

5. Summary
A marriage under Tianxiang is mostly harmonious, but giving must be balanced with self-worth. The person's own mindset and behavior (such as actively maintaining the relationship) have a profound effect on the outcome. If Tianxiang sits with auspicious stars, or the Spouse palace is met by Hongluan or Tianxi, a good and lasting marriage is likely.
It is recommended to combine this with a full chart analysis for a more precise reading!
```
|
kibaraki/wav2vec2-large-xlsr-53-shinekhen-buryat | kibaraki | 2025-09-25T01:46:27 | 19 | 0 | null | ["safetensors", "wav2vec2", "automatic-speech-recognition", "dataset:kibaraki/Shinekhen-Buryat", "arxiv:2509.15373", "base_model:facebook/wav2vec2-large-xlsr-53", "base_model:finetune:facebook/wav2vec2-large-xlsr-53", "license:cc-by-sa-4.0", "region:us"] | automatic-speech-recognition | 2025-09-16T20:45:31 |
---
license: cc-by-sa-4.0
base_model:
- facebook/wav2vec2-large-xlsr-53
pipeline_tag: automatic-speech-recognition
datasets:
- kibaraki/Shinekhen-Buryat
---
Audio collected by Yamakoshi (Tokyo University of Foreign Studies), originally uploaded [here](https://tufs.repo.nii.ac.jp/search?search_type=2&q=1729497608274) [(CC BY-SA 4.0)](https://creativecommons.org/licenses/by-sa/4.0/deed.en).
Audio is converted to per-sentence audio clips.
Used in [[paper]](https://arxiv.org/abs/2509.15373) [[GitHub]](https://github.com/kibaraki/frustratingly-easy-asr-augmentation)
Checkpoint: `fl_e30_b4_lr1e-4_cer_0_clean`

| Split | PER | WER |
|---|---|---|
| Val | 16.0 | 48.8 |
| Test | 16.3 | 47.4 |
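A minimal `transformers` sketch for transcription (the clip path is illustrative; per-sentence clips like those in the dataset above are assumed):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="kibaraki/wav2vec2-large-xlsr-53-shinekhen-buryat",
)
print(asr("clip.wav")["text"])  # hypothetical per-sentence audio clip
```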
|
Linksome/QwQ-32B-10000r_1_Base_3eps | Linksome | 2025-09-25T01:45:53 | 0 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-32B", "base_model:finetune:Qwen/Qwen2.5-32B", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-25T01:34:34 |
---
library_name: transformers
license: other
base_model: Qwen/Qwen2.5-32B
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: 10000r_1_Base_3eps
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 10000r_1_Base_3eps
This model is a fine-tuned version of [Qwen/Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) on the rephrasing_10000 dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 6
- gradient_accumulation_steps: 16
- total_train_batch_size: 1536
- total_eval_batch_size: 48
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- num_epochs: 3.0
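For reference, the total train batch size above follows from the other settings: 16 (per-device) × 6 (GPUs) × 16 (gradient-accumulation steps) = 1536.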
### Training results
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0.dev20250319+cu128
- Datasets 4.0.0
- Tokenizers 0.22.1
|
Indus-Labs/saavi_tts_v2 | Indus-Labs | 2025-09-25T01:19:44 | 0 | 0 | null | ["safetensors", "llama", "text-to-speech", "hindi", "hinglish", "audio-generation", "fine-tuned", "unsloth", "text-generation", "conversational", "hi", "en", "base_model:snorbyte/snorTTS-Indic-v0", "base_model:finetune:snorbyte/snorTTS-Indic-v0", "license:llama3.2", "region:us"] | text-generation | 2025-09-24T17:31:15 |
---
license: llama3.2
base_model: snorbyte/snorTTS-Indic-v0
tags:
- text-to-speech
- hindi
- hinglish
- audio-generation
- fine-tuned
- unsloth
language:
- hi
- en
pipeline_tag: text-generation
---
# Hinglish TTS 3B Model
This is a fine-tuned version of [snorbyte/snorTTS-Indic-v0](https://huggingface.co/snorbyte/snorTTS-Indic-v0) specialized for Hinglish (Hindi-English mixed) text-to-speech generation.
## Model Details
- **Base Model**: canopylabs/3b-hi-pretrain-research_release
- **Fine-tuning Method**: LoRA with Unsloth (merged)
- **Languages**: Hindi, English, Hinglish
- **Task**: Text-to-Speech via audio token generation
- **Model Size**: ~3B parameters
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
# Load model and tokenizer
model_name = "Indus-Labs/saavi_tts_v2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
model_name,
torch_dtype=torch.float16,
device_map="auto"
)
# Generate audio tokens from the text prompt
prompt = "Hello doston, main aapka dost hun"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=1200)
```
## Fine-tuning Details
- **LoRA Rank**: 64
- **LoRA Alpha**: 64
- **Target Modules**: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
- **Training Framework**: Unsloth
## Audio Generation
This model generates audio tokens that need to be decoded using a SNAC (Multi-Scale Neural Audio Codec) model:
```python
from snac import SNAC
# Load SNAC decoder
snac_model = SNAC.from_pretrained("hubertsiuzdak/snac_24khz")
# Process generated tokens to audio codes and decode
# (See full implementation in the original training code)
```
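As a rough decoding sketch (an assumption: `layer_1`/`layer_2`/`layer_3` are hypothetical lists holding the generated audio-token ids already regrouped into SNAC's three codebook levels; the exact regrouping is defined by the original training code referenced above):

```python
import torch
import soundfile as sf  # hypothetical choice for writing the waveform
from snac import SNAC

snac_model = SNAC.from_pretrained("hubertsiuzdak/snac_24khz").eval()

# layer_1/layer_2/layer_3: hypothetical names for the generated token ids
# regrouped into SNAC's three codebook levels (frame layout comes from training).
codes = [
    torch.tensor(layer_1).unsqueeze(0),  # coarse level, shape [1, T]
    torch.tensor(layer_2).unsqueeze(0),  # middle level, shape [1, 2T]
    torch.tensor(layer_3).unsqueeze(0),  # fine level,   shape [1, 4T]
]
with torch.inference_mode():
    audio = snac_model.decode(codes)  # waveform tensor at 24 kHz

sf.write("output.wav", audio.squeeze().cpu().numpy(), 24000)
```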
## Limitations
- Requires SNAC model for audio generation
- Optimized for Hinglish content
- May not perform well on pure English or pure Hindi in some cases
## Citation
If you use this model, please cite the original base model:
```bibtex
@misc{canopylabs-3b-hi,
title={3B Hindi Pretrained Model},
author={Canopy Labs},
year={2024},
url={https://huggingface.co/canopylabs/3b-hi-pretrain-research_release}
}
```
|
aoi-ot/VibeVoice-Large | aoi-ot | 2025-09-25T00:32:21 | 32,927 | 161 | vibevoice | ["vibevoice", "safetensors", "Podcast", "text-to-speech", "en", "zh", "arxiv:2508.19205", "arxiv:2412.08635", "license:mit", "region:us"] | text-to-speech | 2025-09-04T04:15:52 |
---
license: mit
language:
- en
- zh
pipeline_tag: text-to-speech
tags:
- Podcast
library_name: vibevoice
---
## VibeVoice: A Frontier Open-Source Text-to-Speech Model
> This repository contains a copy of model weights obtained from ModelScope ([microsoft/VibeVoice-Large](https://www.modelscope.cn/models/microsoft/VibeVoice-Large)).
> The license for this model is the `MIT License`, **which permits redistribution**.
>
> My understanding of the MIT License, which is consistent with the broader open-source community's consensus,
> is that it grants the right to distribute copies of the software and its derivatives.
> Therefore, I am lawfully exercising the right to redistribute this model.
>
> If you are a rights holder and believe this understanding of the license is incorrect, please submit a DMCA complaint to Hugging Face at _dmca@huggingface.co_
VibeVoice is a novel framework designed for generating expressive, long-form, multi-speaker conversational audio, such as podcasts, from text. It addresses significant challenges in traditional Text-to-Speech (TTS) systems, particularly in scalability, speaker consistency, and natural turn-taking.
A core innovation of VibeVoice is its use of continuous speech tokenizers (Acoustic and Semantic) operating at an ultra-low frame rate of 7.5 Hz. These tokenizers efficiently preserve audio fidelity while significantly boosting computational efficiency for processing long sequences. VibeVoice employs a next-token diffusion framework, leveraging a Large Language Model (LLM) to understand textual context and dialogue flow, and a diffusion head to generate high-fidelity acoustic details.
The model can synthesize speech up to **90 minutes** long with up to **4 distinct speakers**, surpassing the typical 1-2 speaker limits of many prior models.
➡️ **Technical Report:** [VibeVoice Technical Report](https://arxiv.org/abs/2508.19205)
➡️ **Project Page:** [microsoft/VibeVoice](https://microsoft.github.io/VibeVoice)
➡️ **Code:** [microsoft/VibeVoice-Code](https://github.com/microsoft/VibeVoice)
<p align="left">
<img src="figures/Fig1.png" alt="VibeVoice Overview" height="250px">
</p>
## Training Details
Transformer-based Large Language Model (LLM) integrated with specialized acoustic and semantic tokenizers and a diffusion-based decoding head.
- LLM: Qwen2.5 for this release.
- Tokenizers:
- Acoustic Tokenizer: Based on a σ-VAE variant (proposed in [LatentLM](https://arxiv.org/pdf/2412.08635)), with a mirror-symmetric encoder-decoder structure featuring 7 stages of modified Transformer blocks. Achieves 3200x downsampling from 24kHz input. Encoder/decoder components are ~340M parameters each.
- Semantic Tokenizer: Encoder mirrors the Acoustic Tokenizer's architecture (without VAE components). Trained with an ASR proxy task.
- Diffusion Head: Lightweight module (4 layers, ~600M parameters) conditioned on LLM hidden states. Predicts acoustic VAE features using a Denoising Diffusion Probabilistic Models (DDPM) process. Uses Classifier-Free Guidance (CFG) and DPM-Solver (and variants) during inference.
- Context Length: Trained with a curriculum increasing up to 32,768 tokens.
- Training Stages:
- Tokenizer Pre-training: Acoustic and Semantic tokenizers are pre-trained separately.
- VibeVoice Training: Pre-trained tokenizers are frozen; only the LLM and diffusion head parameters are trained. A curriculum learning strategy is used for input sequence length (4k -> 16K -> 32K). Text tokenizer not explicitly specified, but the LLM (Qwen2.5) typically uses its own. Audio is "tokenized" via the acoustic and semantic tokenizers.
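(As a consistency check: a 3200× reduction of 24 kHz audio corresponds to 24,000 / 3,200 = 7.5 frames per second, the ultra-low frame rate quoted above.)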
## Models
| Model | Context Length | Generation Length | Weight |
|-------|----------------|----------|----------|
| VibeVoice-0.5B-Streaming | - | - | On the way |
| VibeVoice-1.5B | 64K | ~90 min | [HF link](https://huggingface.co/microsoft/VibeVoice-1.5B) |
| VibeVoice-Large| 32K | ~45 min | You are here. |
## Installation and Usage
Please refer to [GitHub README](https://github.com/microsoft/VibeVoice?tab=readme-ov-file#installation)
## Responsible Usage
### Direct intended uses
The VibeVoice model is limited to research purpose use exploring highly realistic audio dialogue generation detailed in the [tech report](https://arxiv.org/pdf/2508.19205).
### Out-of-scope uses
Use in any manner that violates applicable laws or regulations (including trade compliance laws). Use in any other way that is prohibited by the MIT License. Use to generate any text transcript. Furthermore, this release is not intended or licensed for any of the following scenarios:
- Voice impersonation without explicit, recorded consent – cloning a real individual’s voice for satire, advertising, ransom, social‑engineering, or authentication bypass.
- Disinformation or impersonation – creating audio presented as genuine recordings of real people or events.
- Real‑time or low‑latency voice conversion – telephone or video‑conference “live deep‑fake” applications.
- Unsupported language – the model is trained only on English and Chinese data; outputs in other languages are unsupported and may be unintelligible or offensive.
- Generation of background ambience, Foley, or music – VibeVoice is speech‑only and will not produce coherent non‑speech audio.
## Risks and limitations
While efforts have been made to optimize it through various techniques, it may still produce outputs that are unexpected, biased, or inaccurate. VibeVoice inherits any biases, errors, or omissions produced by its base model.
Potential for Deepfakes and Disinformation: High-quality synthetic speech can be misused to create convincing fake audio content for impersonation, fraud, or spreading disinformation. Users must ensure transcripts are reliable, check content accuracy, and avoid using generated content in misleading ways. Users are expected to use the generated content and to deploy the models in a lawful manner, in full compliance with all applicable laws and regulations in the relevant jurisdictions. It is best practice to disclose the use of AI when sharing AI-generated content.
English and Chinese only: Transcripts in language other than English or Chinese may result in unexpected audio outputs.
Non-Speech Audio: The model focuses solely on speech synthesis and does not handle background noise, music, or other sound effects.
Overlapping Speech: The current model does not explicitly model or generate overlapping speech segments in conversations.
## Recommendations
We do not recommend using VibeVoice in commercial or real-world applications without further testing and development. This model is intended for research and development purposes only. Please use responsibly.
To mitigate the risks of misuse, we have:
- Embedded an audible disclaimer (e.g. “This segment was generated by AI”) automatically into every synthesized audio file.
- Added an imperceptible watermark to generated audio so third parties can verify VibeVoice provenance (see contact information at the end of this model card).
- Logged inference requests (hashed) for abuse-pattern detection, and we publish aggregated statistics quarterly.
Users are responsible for sourcing their datasets legally and ethically. This may include securing appropriate rights and/or anonymizing data prior to use with VibeVoice. Users are reminded to be mindful of data privacy concerns.
## Contact
This project was conducted by members of Microsoft Research. We welcome feedback and collaboration from our audience. If you have suggestions, questions, or observe unexpected/offensive behavior in our technology, please contact us at VibeVoice@microsoft.com.
If the team receives reports of undesired behavior or identifies issues independently, we will update this repository with appropriate mitigations.
|
wikeeyang/Real-Qwen-Image-v1.0 | wikeeyang | 2025-09-25T00:29:48 | 1,939 | 9 | diffusers | ["diffusers", "gguf", "art", "text-to-image", "en", "zh", "base_model:Qwen/Qwen-Image", "base_model:quantized:Qwen/Qwen-Image", "license:apache-2.0", "region:us"] | text-to-image | 2025-08-27T13:55:27 |
---
license: apache-2.0
language:
- en
- zh
library_name: diffusers
pipeline_tag: text-to-image
base_model:
- Qwen/Qwen-Image
tags:
- art
---
## Real-Qwen-Image v1.0 version:
本模型为 Qwen_Image 微调模型,主要提升了出图的清晰度和写实感。具体效果参见示例图片,<u>图片中也附带有 ComfyUI 工作流</u>,本模型极易使用、快速出图、LoRA兼容性良好。
The model is a Qwen_Image fine-tune that enhances the clarity and realism of generated images. For specific effects, please refer to the example images, which also <u>include the ComfyUI workflow</u>. The model is very easy to use, generates images quickly, and has good LoRA compatibility.
## Also on:
<u>https://www.modelscope.cn/models/wikeeyang/Real-Qwen-Image</u>
<u>https://civitai.com/models/1898752</u>
### 模型使用:
基本组合:euler+simple,cfg 1.0,steps 20 - 30,您可以尝试不同的组合。
Basic combination: euler + simple, CFG 1.0, 20-30 steps. You can try other combinations as well.
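As a rough diffusers-style sketch (assumptions: this repo ships ComfyUI-oriented files, so the base Qwen/Qwen-Image id is used for illustration, and the prompt is illustrative; step count and CFG follow the settings above):

```python
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image", torch_dtype=torch.bfloat16
).to("cuda")

image = pipe(
    prompt="photorealistic street portrait at dusk, natural light",
    num_inference_steps=30,  # steps 20-30, per the settings above
    true_cfg_scale=1.0,      # cfg 1.0, per the settings above
).images[0]
image.save("real-qwen-image.png")
```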
<p align="center">
<img src="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/qwen_image_logo.png" width="400"/>
</p>
<p align="center">
💜 <a href="https://chat.qwen.ai/"><b>Qwen Chat</b></a>   |   🤗 <a href="https://huggingface.co/Qwen/Qwen-Image">Hugging Face</a>   |   🤖 <a href="https://modelscope.cn/models/Qwen/Qwen-Image">ModelScope</a>   |    📑 <a href="https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-Image/Qwen_Image.pdf">Tech Report</a>    |    📑 <a href="https://qwenlm.github.io/blog/qwen-image/">Blog</a>   
<br>
🖥️ <a href="https://huggingface.co/spaces/Qwen/qwen-image">Demo</a>   |   💬 <a href="https://github.com/QwenLM/Qwen-Image/blob/main/assets/wechat.png">WeChat (微信)</a>   |   🫨 <a href="https://discord.gg/CV4E9rpNSD">Discord</a>  
</p>
<p align="center">
<img src="Real-Qwen-Image-V1-workflow-02.png" width="1200"/>
<img src="Real-Qwen-Image-V1-workflow-01.png" width="1200"/>
</p>
## License Agreement
Qwen-Image is licensed under Apache 2.0.
|
Den6687/Mr-Job-Vanderbilt-12B | Den6687 | 2025-09-24T23:20:56 | 0 | 0 | transformers | ["transformers", "safetensors", "text-generation-inference", "unsloth", "mistral", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us"] | null | 2025-09-24T23:20:37 |
---
base_model: unsloth/mistral-nemo-base-2407-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Den6687
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-nemo-base-2407-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
JigsawStack/moondream2 | JigsawStack | 2025-09-24T23:06:22 | 64 | 0 | null | ["safetensors", "moondream1", "image-text-to-text", "custom_code", "license:apache-2.0", "endpoints_compatible", "region:us"] | image-text-to-text | 2025-09-23T21:11:05 |
---
license: apache-2.0
pipeline_tag: image-text-to-text
new_version: moondream/moondream3-preview
---
⚠️ This repository contains the latest version of Moondream 2, our previous generation model. The latest version of Moondream is [Moondream 3 (Preview)](https://huggingface.co/moondream/moondream3-preview).
---
Moondream is a small vision language model designed to run efficiently everywhere.
[Website](https://moondream.ai/) / [Demo](https://moondream.ai/playground) / [GitHub](https://github.com/vikhyat/moondream)
This repository contains the latest (**2025-06-21**) release of Moondream 2, as well as [historical releases](https://huggingface.co/vikhyatk/moondream2/blob/main/versions.txt). The model is updated frequently, so we recommend specifying a revision as shown below if you're using it in a production application.
### Usage
```python
from transformers import AutoModelForCausalLM
from PIL import Image

model = AutoModelForCausalLM.from_pretrained(
    "vikhyatk/moondream2",
    revision="2025-06-21",
    trust_remote_code=True,
    device_map={"": "cuda"}  # ...or 'mps', on Apple Silicon
)

# Load the image to analyze (the path is illustrative)
image = Image.open("example.jpg")
# Captioning
print("Short caption:")
print(model.caption(image, length="short")["caption"])
print("\nNormal caption:")
for t in model.caption(image, length="normal", stream=True)["caption"]:
# Streaming generation example, supported for caption() and detect()
print(t, end="", flush=True)
print(model.caption(image, length="normal"))
# Visual Querying
print("\nVisual query: 'How many people are in the image?'")
print(model.query(image, "How many people are in the image?")["answer"])
# Object Detection
print("\nObject detection: 'face'")
objects = model.detect(image, "face")["objects"]
print(f"Found {len(objects)} face(s)")
# Pointing
print("\nPointing: 'person'")
points = model.point(image, "person")["points"]
print(f"Found {len(points)} person(s)")
```
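The grounded reasoning mode and the OCR prompts described in the changelog below can be exercised through the same `query` skill; a minimal sketch (the question strings are illustrative):

```python
# Grounded reasoning (2025-06-21 release): slower, but grounds the answer
# in spatial positions within the image before answering
print(model.query(image, "What is the median value in the chart?", reasoning=True)["answer"])

# OCR prompt (2025-04-15 release)
print(model.query(image, "Transcribe the text in natural reading order")["answer"])
```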
### Changelog
**2025-06-21** ([full release notes](https://moondream.ai/blog/moondream-2025-06-21-release))
* **Grounded Reasoning**
Introduces a new step-by-step reasoning mode that explicitly grounds reasoning in spatial positions within the image before answering, leading to more precise visual interpretation (e.g., chart median calculations, accurate counting). Enable with `reasoning=True` in the `query` skill to trade off speed vs. accuracy.
* **Sharper Object Detection**
Uses reinforcement learning on higher-quality bounding-box annotations to reduce object clumping and improve fine-grained detections (e.g., distinguishing “blue bottle” vs. “bottle”).
* **Faster Text Generation**
Yields 20–40% faster response generation via a new “superword” tokenizer and lightweight tokenizer transfer hypernetwork, which reduces the number of tokens emitted without loss in accuracy and eases future multilingual extensions.
* **Improved UI Understanding**
Boosts ScreenSpot (UI element localization) performance from an F1\@0.5 of 60.3 to 80.4, making Moondream more effective for UI-focused applications.
* **Reinforcement Learning Enhancements**
RL fine-tuning applied across 55 vision-language tasks to reinforce grounded reasoning and detection capabilities, with a roadmap to expand to \~120 tasks in the next update.
**2025-04-15** ([full release notes](https://moondream.ai/blog/moondream-2025-04-14-release))
1. Improved chart understanding (ChartQA up from 74.8 to 77.5, 82.2 with PoT)
2. Added temperature and nucleus sampling to reduce repetitive outputs
3. Better OCR for documents and tables (prompt with “Transcribe the text” or “Transcribe the text in natural reading order”)
4. Object detection supports document layout detection (figure, formula, text, etc)
5. UI understanding (ScreenSpot F1\@0.5 up from 53.3 to 60.3)
6. Improved text understanding (DocVQA up from 76.5 to 79.3, TextVQA up from 74.6 to 76.3)
**2025-03-27** ([full release notes](https://moondream.ai/blog/moondream-2025-03-27-release))
1. Added support for long-form captioning
2. Open vocabulary image tagging
3. Improved counting accuracy (e.g. CountBenchQA increased from 80 to 86.4)
4. Improved text understanding (e.g. OCRBench increased from 58.3 to 61.2)
5. Improved object detection, especially for small objects (e.g. COCO up from 30.5 to 51.2)
6. Fixed token streaming bug affecting multi-byte unicode characters
7. gpt-fast style `compile()` now supported in HF Transformers implementation
|
mradermacher/QiMing-AD-20B-MXFP4-GGUF | mradermacher | 2025-09-24T23:00:09 | 0 | 0 | transformers | ["transformers", "gguf", "unsloth", "QiMing", "vllm", "sales", "b2b", "Strategist", "saas", "fine-tuned", "instruction-following", "role-playing", "cognitive-simulator", "en", "zh", "base_model:aifeifei798/QiMing-AD-20B-MXFP4", "base_model:quantized:aifeifei798/QiMing-AD-20B-MXFP4", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational"] | null | 2025-09-24T19:13:23 |
---
base_model: aifeifei798/QiMing-AD-20B-MXFP4
language:
- en
- zh
library_name: transformers
license: apache-2.0
model_name: QiMing-AD-20B-MXFP4
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- unsloth
- QiMing
- vllm
- sales
- b2b
- Strategist
- saas
- fine-tuned
- instruction-following
- role-playing
- cognitive-simulator
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: MXFP4_MOE x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/aifeifei798/QiMing-AD-20B-MXFP4
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#QiMing-AD-20B-MXFP4-GGUF).***
weighted/imatrix quants are available at https://huggingface.co/mradermacher/QiMing-AD-20B-MXFP4-i1-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
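As a minimal sketch (an assumption: llama-cpp-python is just one of several runtimes that can load these files; the prompt is illustrative):

```python
from llama_cpp import Llama

# Pulls the Q4_K_M file ("fast, recommended" in the table below) from this repo
llm = Llama.from_pretrained(
    repo_id="mradermacher/QiMing-AD-20B-MXFP4-GGUF",
    filename="QiMing-AD-20B-MXFP4.Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm("Draft a short B2B SaaS sales follow-up email.", max_tokens=256)
print(out["choices"][0]["text"])
```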
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/QiMing-AD-20B-MXFP4-GGUF/resolve/main/QiMing-AD-20B-MXFP4.Q3_K_S.gguf) | Q3_K_S | 12.2 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-AD-20B-MXFP4-GGUF/resolve/main/QiMing-AD-20B-MXFP4.Q2_K.gguf) | Q2_K | 12.2 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-AD-20B-MXFP4-GGUF/resolve/main/QiMing-AD-20B-MXFP4.IQ4_XS.gguf) | IQ4_XS | 12.3 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-AD-20B-MXFP4-GGUF/resolve/main/QiMing-AD-20B-MXFP4.Q3_K_M.gguf) | Q3_K_M | 13.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/QiMing-AD-20B-MXFP4-GGUF/resolve/main/QiMing-AD-20B-MXFP4.Q3_K_L.gguf) | Q3_K_L | 13.4 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-AD-20B-MXFP4-GGUF/resolve/main/QiMing-AD-20B-MXFP4.Q4_K_S.gguf) | Q4_K_S | 14.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QiMing-AD-20B-MXFP4-GGUF/resolve/main/QiMing-AD-20B-MXFP4.Q4_K_M.gguf) | Q4_K_M | 15.9 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/QiMing-AD-20B-MXFP4-GGUF/resolve/main/QiMing-AD-20B-MXFP4.Q5_K_S.gguf) | Q5_K_S | 16.0 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-AD-20B-MXFP4-GGUF/resolve/main/QiMing-AD-20B-MXFP4.Q5_K_M.gguf) | Q5_K_M | 17.0 | |
| [GGUF](https://huggingface.co/mradermacher/QiMing-AD-20B-MXFP4-GGUF/resolve/main/QiMing-AD-20B-MXFP4.Q6_K.gguf) | Q6_K | 22.3 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/QiMing-AD-20B-MXFP4-GGUF/resolve/main/QiMing-AD-20B-MXFP4.Q8_0.gguf) | Q8_0 | 22.4 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
corzamennav/blockassist-bc-territorial_wild_antelope_1758753798 | corzamennav | 2025-09-24T22:44:29 | 0 | 0 | null | ["gensyn", "blockassist", "gensyn-blockassist", "minecraft", "territorial wild antelope", "arxiv:2504.07091", "region:us"] | null | 2025-09-24T22:44:20 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- territorial wild antelope
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
ssancak368/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-huge_gregarious_fly | ssancak368 | 2025-09-24T22:44:22 | 54 | 0 | transformers | ["transformers", "safetensors", "qwen2", "text-generation", "rl-swarm", "genrl-swarm", "grpo", "gensyn", "I am huge_gregarious_fly", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2025-09-20T12:23:16 |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am huge_gregarious_fly
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
timm/vit_7b_patch16_dinov3.lvd1689m
|
timm
| 2025-09-24T22:37:27 | 132 | 0 |
timm
|
[
"timm",
"safetensors",
"image-feature-extraction",
"transformers",
"dataset:lvd-1689m",
"arxiv:2508.10104",
"arxiv:2010.11929",
"license:other",
"region:us"
] |
image-feature-extraction
| 2025-09-17T16:51:13 |
---
tags:
- image-feature-extraction
- timm
- transformers
pipeline_tag: image-feature-extraction
library_name: timm
license: other
license_name: dinov3-license
license_link: https://ai.meta.com/resources/models-and-libraries/dinov3-license
datasets:
- lvd-1689m
---
# Model card for vit_7b_patch16_dinov3.lvd1689m
A DINOv3 ViT model image feature encoder. Pretrained on LVD-1689M with self-supervised DINOv3 method.
## Model Notes
* The original model weights ended up with all QKV projection biases being zero. For `timm`, the QKV bias has been disabled (`qkv_bias=False`) and the zero weights are not loaded. For some model sizes there are variants with `qkvb` in the name that keep the bias enabled (`qkv_bias=True`), but zero, to match the behaviour of `transformers` and the original models.
* The original models keep RoPE periods as a persistent `bfloat16` buffer, while `timm` generates `float32` periods at init. This results in some numerical differences; however, the `timm` approach should be less problematic on devices without bfloat16 support, and appears to work as well, if not slightly better, for fine-tuning. `model.rope.periods = model.rope.periods.to(torch.bfloat16).to(torch.float32)` will truncate the periods to bfloat16 and result in matching outputs; a minimal example follows.
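For example, a minimal snippet applying that truncation after loading (this assumes the `timm` model exposes the `model.rope.periods` attribute described above):

```python
import timm
import torch

# Load the encoder, then truncate RoPE periods to bfloat16 precision so
# outputs match the original DINOv3 release (see the second note above).
model = timm.create_model('vit_7b_patch16_dinov3.lvd1689m', pretrained=True)
model = model.eval()
model.rope.periods = model.rope.periods.to(torch.bfloat16).to(torch.float32)
```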
## Model Details
- **Model Type:** Image Feature Encoder
- **Model Stats:**
- Params (M): 6716.0
- GMACs: 1775.1
- Activations (M): 515.9
- Image size: 256 x 256
- **Original:** https://github.com/facebookresearch/dinov3
- **License:** [DINOv3](https://ai.meta.com/resources/models-and-libraries/dinov3-license)
- **Dataset:** LVD-1689M
- **Papers:**
- DINOv3: https://arxiv.org/abs/2508.10104
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale: https://arxiv.org/abs/2010.11929v2
- PyTorch Image Models: https://github.com/huggingface/pytorch-image-models
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model('vit_7b_patch16_dinov3.lvd1689m', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_7b_patch16_dinov3.lvd1689m',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g.:
# torch.Size([1, 4096, 16, 16])
# torch.Size([1, 4096, 16, 16])
# torch.Size([1, 4096, 16, 16])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(urlopen(
'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))
model = timm.create_model(
'vit_7b_patch16_dinov3.lvd1689m',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 261, 4096) shaped tensor
output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
## Model Comparison
See the associated paper for details on the evaluation protocols.
### Results for ViT backbones pretrained (or distilled) on web (LVD-1689M)
| Model | IN-ReaL | IN-R | Obj.Net | Ox.-H | ADE20k | NYU↓ | DAVIS | NAVI | SPair |
|-------|---------|------|---------|-------|--------|------|-------|------|-------|
| **Global Tasks** | | | | | **Dense Tasks** | | | | |
| DINOv3 ViT-S/16 | 87.0 | 60.4 | 50.9 | 49.5 | 47.0 | 0.403 | 72.7 | 56.3 | 50.4 |
| DINOv3 ViT-S+/16 | 88.0 | 68.8 | 54.6 | 50.0 | 48.8 | 0.399 | 75.5 | 57.1 | 55.2 |
| DINOv3 ViT-B/16 | 89.3 | 76.7 | 64.1 | 58.5 | 51.8 | 0.373 | 77.2 | 58.8 | 57.2 |
| DINOv3 ViT-L/16 | 90.2 | 88.1 | 74.8 | 63.1 | 54.9 | 0.352 | 79.9 | 62.3 | 61.3 |
| DINOv3 ViT-H+/16 | 90.3 | 90.0 | 78.6 | 64.5 | 54.8 | 0.352 | 79.3 | 63.3 | 56.3 |
| DINOv3 ViT-7B/16 | 90.4 | 91.1 | 91.1 | 72.8 | 55.9 | 0.309 | 79.7 | 64.4 | 58.7 |
### Results for ConvNeXt backbones distilled on web (LVD-1689M)
| Model | IN-ReaL @256px | IN-ReaL @512px | IN-R @256px | IN-R @512px | Obj.Net @256px | Obj.Net @512px | ADE20k | NYU↓ |
|-------|----------------|----------------|-------------|-------------|----------------|----------------|--------|------|
| **Global Tasks** | | | | | | | **Dense Tasks** | |
| DINOv3 ConvNeXt Tiny | 86.6 | 87.7 | 73.7 | 74.1 | 52.6 | 58.7 | 42.7 | 0.448 |
| DINOv3 ConvNeXt Small | 87.9 | 88.7 | 73.7 | 74.1 | 52.6 | 58.7 | 44.8 | 0.432 |
| DINOv3 ConvNeXt Base | 88.5 | 89.2 | 77.2 | 78.2 | 56.2 | 61.3 | 46.3 | 0.420 |
| DINOv3 ConvNeXt Large | 88.9 | 89.4 | 81.3 | 82.4 | 59.3 | 65.2 | 47.8 | 0.403 |
### Results for ViT backbones pretrained (or distilled) on satellite (SAT-493M)
#### (GEO-Bench) Classification
| Model | m-BEnet | m-brick-kiln | m-eurosat | m-forestnet | m-pv4ger | m-so2sat | mean |
|-------|---------|--------------|-----------|-------------|----------|----------|------|
| DINOv3 ViT-L/16 | 73.0 | 96.5 | 94.1 | 60.6 | 96.0 | 57.4 | 79.6 |
| DINOv3 ViT-7B/16 | 74.0 | 97.2 | 94.8 | 62.3 | 96.1 | 62.1 | 81.1 |
#### (GEO-Bench) Segmentation
| Model | m-cashew | m-chesapeake | m-NeonTree | m-nz-cattle | m-pv4ger-seg | m-SA-crop | mean |
|-------|----------|--------------|------------|-------------|--------------|-----------|------|
| DINOv3 ViT-L/16 | 94.2 | 75.6 | 61.8 | 83.7 | 95.2 | 36.8 | 74.5 |
| DINOv3 ViT-7B/16 | 94.1 | 76.6 | 62.6 | 83.4 | 95.5 | 37.6 | 75.0 |
## Citation
```bibtex
@article{simeoni2025dinov3,
  title={DINOv3},
  author={Sim{\'e}oni, Oriane and Vo, Huy V and Seitzer, Maximilian and Baldassarre, Federico and Oquab, Maxime and Jose, Cijo and Khalidov, Vasil and Szafraniec, Marc and Yi, Seungeun and Ramamonjisoa, Micha{\"e}l and others},
  journal={arXiv preprint arXiv:2508.10104},
  year={2025}
}
```
```bibtex
@article{dosovitskiy2020vit,
title={An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale},
author={Dosovitskiy, Alexey and Beyer, Lucas and Kolesnikov, Alexander and Weissenborn, Dirk and Zhai, Xiaohua and Unterthiner, Thomas and Dehghani, Mostafa and Minderer, Matthias and Heigold, Georg and Gelly, Sylvain and Uszkoreit, Jakob and Houlsby, Neil},
journal={ICLR},
year={2021}
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
|
dstilesr/glotlid-roberta-classifier
|
dstilesr
| 2025-09-24T21:59:17 | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"roberta",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-24T21:58:50 |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
onnx-community/mdbr-leaf-mt-ONNX
|
onnx-community
| 2025-09-24T21:02:13 | 0 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"bert",
"feature-extraction",
"base_model:MongoDB/mdbr-leaf-mt",
"base_model:quantized:MongoDB/mdbr-leaf-mt",
"license:apache-2.0",
"region:us"
] |
feature-extraction
| 2025-09-24T17:57:12 |
---
license: apache-2.0
base_model:
- MongoDB/mdbr-leaf-mt
pipeline_tag: feature-extraction
library_name: transformers.js
---
https://huggingface.co/MongoDB/mdbr-leaf-mt with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
You can then use the model to compute embeddings like this:
```js
import { AutoModel, AutoTokenizer, matmul } from "@huggingface/transformers";
// Download from the 🤗 Hub
const model_id = "onnx-community/mdbr-leaf-mt-ONNX";
const tokenizer = await AutoTokenizer.from_pretrained(model_id);
const model = await AutoModel.from_pretrained(model_id, {
dtype: "fp32", // Options: "fp32" | "fp16" | "q8" | "q4" | "q4f16"
});
// Prepare queries and documents
const queries = [
"What is machine learning?",
"How does neural network training work?",
];
const documents = [
"Machine learning is a subset of artificial intelligence that focuses on algorithms that can learn from data.",
"Neural networks are trained through backpropagation, adjusting weights to minimize prediction errors.",
];
const inputs = await tokenizer([
...queries.map((x) => "Represent this sentence for searching relevant passages: " + x),
...documents,
], { padding: true });
// Generate embeddings
const { sentence_embedding } = await model(inputs);
const normalized_sentence_embedding = sentence_embedding.normalize();
// Compute similarities
const scores = await matmul(
normalized_sentence_embedding.slice([0, queries.length]),
normalized_sentence_embedding.slice([queries.length, null]).transpose(1, 0),
);
const scores_list = scores.tolist();
for (let i = 0; i < queries.length; ++i) {
console.log(`Query: ${queries[i]}`);
for (let j = 0; j < documents.length; ++j) {
console.log(` Similarity: ${scores_list[i][j].toFixed(4)} | Document ${j}: ${documents[j]}`);
}
console.log();
}
```
<details>
<summary>See example output</summary>

```
Query: What is machine learning?
Similarity: 0.9063 | Document 0: Machine learning is a subset of artificial intelligence that focuses on algorithms that can learn from data.
Similarity: 0.7287 | Document 1: Neural networks are trained through backpropagation, adjusting weights to minimize prediction errors.
Query: How does neural network training work?
Similarity: 0.6725 | Document 0: Machine learning is a subset of artificial intelligence that focuses on algorithms that can learn from data.
Similarity: 0.8287 | Document 1: Neural networks are trained through backpropagation, adjusting weights to minimize prediction errors.
```
</details>
|
Wwayu/DeepSeek-V2-Chat-0628-mlx-2Bit
|
Wwayu
| 2025-09-24T20:53:37 | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"deepseek_v2",
"custom_code",
"base_model:deepseek-ai/DeepSeek-V2-Chat-0628",
"base_model:quantized:deepseek-ai/DeepSeek-V2-Chat-0628",
"license:other",
"2-bit",
"region:us"
] | null | 2025-09-24T20:44:17 |
---
license: other
license_name: deepseek
license_link: https://github.com/deepseek-ai/DeepSeek-V2/blob/main/LICENSE-MODEL
base_model: deepseek-ai/DeepSeek-V2-Chat-0628
tags:
- mlx
---
# Wwayu/DeepSeek-V2-Chat-0628-mlx-2Bit
The Model [Wwayu/DeepSeek-V2-Chat-0628-mlx-2Bit](https://huggingface.co/Wwayu/DeepSeek-V2-Chat-0628-mlx-2Bit) was converted to MLX format from [deepseek-ai/DeepSeek-V2-Chat-0628](https://huggingface.co/deepseek-ai/DeepSeek-V2-Chat-0628) using mlx-lm version **0.26.4**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("Wwayu/DeepSeek-V2-Chat-0628-mlx-2Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
onnxmodelzoo/ssl_resnet50_Opset18
|
onnxmodelzoo
| 2025-09-24T20:53:24 | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-24T20:53:15 |
---
language: en
license: apache-2.0
model_name: ssl_resnet50_Opset18.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/xcit_tiny_24_p8_224_Opset17
|
onnxmodelzoo
| 2025-09-24T20:50:23 | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-24T20:50:16 |
---
language: en
license: apache-2.0
model_name: xcit_tiny_24_p8_224_Opset17.onnx
tags:
- Computer_Vision
- skip
---
|
gbyuvd/ChemMiniQ3-HoriFIE
|
gbyuvd
| 2025-09-24T20:47:20 | 13 | 1 | null |
[
"safetensors",
"chemistry",
"molecular-generation",
"qwen3",
"mtp",
"selfies",
"cheminformatics",
"text-generation",
"arxiv:2505.09388",
"arxiv:2106.13731",
"license:mit",
"region:us"
] |
text-generation
| 2025-09-21T20:48:19 |
---
license: mit
pipeline_tag: text-generation
tags:
- chemistry
- molecular-generation
- qwen3
- mtp
- selfies
- cheminformatics
---
# 🧬 ChemMiniQ3 - with Horizon Loss on SELFIES and Biologically-Aware RL Fine-Tuning
A lightweight experimental generative model for chemistry, built on mini **Qwen3** with **multi-horizon predictive loss** for molecular SELFIES representations.
*Prototype research code — not production-ready. Learning by building.*
<p align="center">
<img src="./img/output1.png" alt="ChemMiniQ3-HoriFIE Sample Output" width="200"/>
<img src="./img/output2.png" alt="ChemMiniQ3-HoriFIE Sample Output" width="200"/>
</p>
A custom Qwen3-style language model, adapted for molecular generation:
- ✅ **Qwen3 Architecture** – Modernized backbone with efficient attention
- ✅ **Multi-Token Prediction (MTP Head)** – Predicts multiple future tokens (1–3) in parallel
- ✅ **Horizon Loss** – Weighted multi-horizon objectives for longer-term sequence coherence (see the sketch after this list)
- ✅ **SELFIES-native Tokenizer** – Robust encoding for valid molecular structures with [FastChemTokenizer](https://github.com/gbyuvd/FastChemTokenizer)
- ✅ **Ranger21 Optimizer** – Adaptive optimizer with warmup/warmdown scheduling
- ✅ **Gradient Checkpointing** – Trainable on smaller GPUs
- ✅ **Streaming Dataset Loader** – Trainable on smaller RAM
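To make the multi-horizon objective concrete, here is a minimal, hedged sketch of a weighted horizon loss over 1–3 future tokens. The function name, tensor layout, and weights are illustrative assumptions; the actual implementation lives in `ChemQ3MTP.py` and `train-withmtp.py` and may differ.

```python
import torch.nn.functional as F

def horizon_loss(logits_per_horizon, targets, weights=(1.0, 0.5, 0.25)):
    """Weighted multi-horizon cross-entropy (illustrative sketch).

    logits_per_horizon: list of [B, T, V] tensors; entry h-1 at position t
        predicts the token at position t + h (here h = 1..3).
    targets: [B, T] token ids.
    """
    total, weight_sum = 0.0, 0.0
    for h, (logits, w) in enumerate(zip(logits_per_horizon, weights), start=1):
        # Align each horizon: position t is scored against the token h steps
        # ahead, so drop the last h positions (no target exists for them).
        loss_h = F.cross_entropy(
            logits[:, :-h, :].reshape(-1, logits.size(-1)),
            targets[:, h:].reshape(-1),
        )
        total = total + w * loss_h
        weight_sum += w
    return total / weight_sum
```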
Experimental RL PPO-KL-ready features:
- ✅ **Enhanced Reward Functions** – Validity, Lipinski, charge neutrality, diversity, complexity (a sketch follows this list)
- ✅ **Curriculum Learning** – Gradually increases generation length during training
- ✅ **Adaptive KL + Entropy Control** – Stabilizes reinforcement learning fine-tuning
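As a rough illustration of how such a composite reward could be assembled, here is a hedged sketch combining validity, Lipinski's rule of five, and charge neutrality. It assumes the `selfies` and `rdkit` packages are installed; the term weights are invented for illustration, and the real reward in `train_ppokl_selfies.py` may differ.

```python
import selfies as sf
from rdkit import Chem
from rdkit.Chem import Descriptors, Lipinski

def composite_reward(selfies_str: str) -> float:
    """Validity + Lipinski + charge-neutrality reward (illustrative weights)."""
    try:
        mol = Chem.MolFromSmiles(sf.decoder(selfies_str))
    except Exception:
        return 0.0
    if mol is None:
        return 0.0                      # invalid molecule earns nothing
    reward = 1.0                        # base reward for a valid molecule
    lipinski_ok = (
        Descriptors.MolWt(mol) <= 500
        and Lipinski.NumHDonors(mol) <= 5
        and Lipinski.NumHAcceptors(mol) <= 10
        and Descriptors.MolLogP(mol) <= 5
    )
    if lipinski_ok:
        reward += 0.5                   # drug-likeness bonus
    if Chem.GetFormalCharge(mol) == 0:
        reward += 0.25                  # charge-neutrality bonus
    return reward
```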
> 💡 **Target domain:** chemistry & molecular generation (SELFIES).
> 🚀 Architecture is potentially generalizable to other sequence domains.
**Pre-trained model's (non-RL) description:**
```
Model has 9,854,851 trainable parameters.
Input shape: torch.Size([2, 32])
Logits shape: torch.Size([2, 32, 782])
Trained on 14k samples from a combined curated dataset built from the COCONUTDB (Sorokina et al., 2021),
ChemBL34 (Zdrazil et al., 2023), and SuperNatural3 (Gallo et al., 2023) datasets
Batch Size: 16 (* 4 Grad acc -> ~64)
Optimizer: Ranger21 (MADGRAD-Lookahead-AdaBelief with gradient centralization, linear warm up (22%),
gradient clipping, and L2 weight decay)
Learning rate: 5e-06 (**Warmup complete - lr set to 3.9e-06)
Training log for E-1:
Warm-up
time step loss eval_loss
2025-09-21 11:21:20 3 26.5189
2025-09-21 11:21:26 6 25.7779
2nd phase with MTP
time step loss eval_loss
2025-09-21 11:52:07 140
2025-09-21 11:54:26 175 20.4449
2025-09-21 11:54:41 175 2.687195301055908
2025-09-21 12:05:43 350 10.405
2025-09-21 12:05:58 350 1.9965996742248535
2025-09-21 12:17:16 525 8.9447
2025-09-21 12:17:31 525 1.8333336114883423
2025-09-21 12:28:34 700 8.2911
2025-09-21 12:28:49 700 1.7291985750198364
2025-09-21 12:28:51 700
Hardware it was trained on: Laptop with NVIDIA GeForce 930M GPU (2GB VRAM), RAM 12 GB, 2 cores Intel i3, SSD
```
## 🚀 Quick Start
- Clone this repository
- Make sure you have the requirements installed
- Configurable via `config.json`
- Run `python train-withmtp.py`
- A demo for generation with a rendered mol image is included in `demo_test_mtpresult.ipynb`
- For the demo, please extract the `pretrained.7z` archive
- To test the prototype PPO-KL RL fine-tuning, try running `train_ppokl_selfies.py` on the pretrained model (make sure the model path is correct)
Tip: feel free to play around with the ChemQ3Model and its training loop/configs!
The sample dataset is included so you can experiment with it. If you have better compute than mine, feel free to share your results in the discussions!
## To-Do
- [x] Adjust FastChemTokenizer tokenizer on new data
- [x] Experimenting with early architecture
- [x] Write initial readme
- [x] Upload backbone and MTP train code
- [x] Demo training on 14K data (only 1 epoch, adding another on this data led to slight overfitting)
- [x] Upload the warmup model
- [x] Tidy up and upload JupyterNotebook(s) train/demo along with sample data
- **[ongoing]** Review, clean, and test code
- [x] Pretraining again after auditing/reviewing the base code
- [x] Test RL code
- [x] Train for 1000 steps for max token length = 80
- [x] Upload RL-trained demo model
- [ ] Ablation studies
- [ ] Implement HF Automodel compatible modules if performance benefit(s) confirmed
- [ ] Complete pretraining on all ~3M dataset (when possible)
- [ ] Chunk I
- [ ] Chunk II
- [ ] Chunk III
- [ ] Chunk IV
- [ ] Publish complete pretraining on GitHub and HF (if compatible)
- [ ] Complete RL fine-tuning on a verified rewards system
---
## 📁 Project Structure
```
ChemMiniQ3-HoriFIE/
├── ChemQ3MTP.py                # Custom model definition
├── train-withmtp.py            # Main MTP trainer with curriculum training combining NTP and MTP
├── config.json                 # Configuration for model definition and training
├── FastChemTokenizer.py        # FastChemTokenizer module
├── train_ppokl_selfies.py      # Prototype PPO-KL RL training script
├── README.md
├── requirements.txt            # A conda env is recommended; other versions may work, please report any bugs
├── selftok_core/               # FastChemTokenizer SELFIES core used for this model (try _wtails to experiment)
├── pretrained/
│   ├── sample-e1/              # Pre-trained weights on the 14k sample dataset, 1st epoch
│   └── sample-RL/
├── demo_test_mtpresult.ipynb   # Demo notebook for generating SELFIES with the pretrained model
├── log_train.txt               # Pre-training console output from the MTP run
└── data/                       # 14k samples from the combined dataset
```
---
## 🔧 Contributing
This project is a **learning experiment** — all contributions are welcome!
- 🧠 Have a better way to implement the methods?
- 📊 Want to add evaluation metrics?
- ✨ Found a bug? Please open an issue!
👉 Please:
- Keep changes minimal and focused.
- Add comments if you change core logic.
---
## ⚠️ Disclaimer
> **This is NOT a production model.**
>
> - Built during late-night prototyping sessions 🌙
> - Not thoroughly validated or benchmarked due to compute constraint
> - Some components are heuristic and unproven
> - May crash, overfit, or generate nonsense (especially outside molecular data)
> - I’m still learning PyTorch, attention mechanisms, and transformer internals
>
> Use this code to learn and experiment — **not to deploy**.
## 📜 License
MIT
## ❤️ Acknowledgments
Based on and inspired by:
- https://github.com/KellerJordan/modded-nanogpt
- https://huggingface.co/docs/transformers/en/model_doc/t5gemma
- https://github.com/aspuru-guzik-group/selfies/
- https://github.com/lessw2020/Ranger21
- https://huggingface.co/gbyuvd/chemfie-gpt-experiment-1
- https://huggingface.co/gbyuvd/bionat-selfies-gen-tokenizer-wordlevel
- An older ChemZiRo-GPT experiment adding RoPE, GQA, MTP, and RMSProp to a GPT-2 backbone
## References
### BibTeX
#### Qwen3
```bibtex
@misc{qwen3technicalreport,
title={Qwen3 Technical Report},
author={Qwen Team},
year={2025},
eprint={2505.09388},
archivePrefix={arXiv},
primaryClass={cs.CL},
url={https://arxiv.org/abs/2505.09388},
}
```
#### COCONUTDB
```bibtex
@article{sorokina2021coconut,
title={COCONUT online: Collection of Open Natural Products database},
author={Sorokina, Maria and Merseburger, Peter and Rajan, Kohulan and Yirik, Mehmet Aziz and Steinbeck, Christoph},
journal={Journal of Cheminformatics},
volume={13},
number={1},
pages={2},
year={2021},
doi={10.1186/s13321-020-00478-9}
}
```
#### ChemBL34
```bibtex
@article{zdrazil2023chembl,
title={The ChEMBL Database in 2023: a drug discovery platform spanning multiple bioactivity data types and time periods},
author={Zdrazil, Barbara and Felix, Eloy and Hunter, Fiona and Manners, Emma J and Blackshaw, James and Corbett, Sybilla and de Veij, Marleen and Ioannidis, Harris and Lopez, David Mendez and Mosquera, Juan F and Magarinos, Maria Paula and Bosc, Nicolas and Arcila, Ricardo and Kizil{\"o}ren, Tevfik and Gaulton, Anna and Bento, A Patr{\'i}cia and Adasme, Melissa F and Monecke, Peter and Landrum, Gregory A and Leach, Andrew R},
journal={Nucleic Acids Research},
year={2023},
volume={gkad1004},
doi={10.1093/nar/gkad1004}
}
@misc{chembl34,
title={ChemBL34},
year={2023},
doi={10.6019/CHEMBL.database.34}
}
```
#### SuperNatural3
```bibtex
@article{Gallo2023,
author = {Gallo, K and Kemmler, E and Goede, A and Becker, F and Dunkel, M and Preissner, R and Banerjee, P},
title = {{SuperNatural 3.0-a database of natural products and natural product-based derivatives}},
journal = {Nucleic Acids Research},
year = {2023},
month = jan,
day = {6},
volume = {51},
number = {D1},
pages = {D654-D659},
doi = {10.1093/nar/gkac1008}
}
```
### Ranger21 Optimizer
```bibtex
@article{wright2021ranger21,
title={Ranger21: a synergistic deep learning optimizer},
author={Wright, Less and Demeure, Nestor},
year={2021},
journal={arXiv preprint arXiv:2106.13731},
}
```
|
onnxmodelzoo/xcit_small_24_p8_224_dist_Opset18
|
onnxmodelzoo
| 2025-09-24T20:18:24 | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-24T20:18:13 |
---
language: en
license: apache-2.0
model_name: xcit_small_24_p8_224_dist_Opset18.onnx
tags:
- Computer_Vision
- skip
---
|
BigRay0x/Qwen3-0.6B-Gensyn-Swarm-moist_dense_mole
|
BigRay0x
| 2025-09-24T20:13:51 | 120 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am moist_dense_mole",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-19T14:19:01 |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am moist_dense_mole
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
100Pudoff/Qwen3-0.6B-Gensyn-Swarm-pensive_large_clam
|
100Pudoff
| 2025-09-24T20:08:31 | 3 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am pensive_large_clam",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-18T15:26:15 |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am pensive_large_clam
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
krasovskiy91/blockassist
|
krasovskiy91
| 2025-09-24T20:05:44 | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"scurrying flapping turkey",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-24T20:05:38 |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- scurrying flapping turkey
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
0408happyfeet/bracelet-detector-automl
|
0408happyfeet
| 2025-09-24T20:04:20 | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-24T15:24:27 |
# Bracelet vs. Not-Bracelet — AutoML (24-679)
Compact image classifier trained with [AutoGluon MultiModal](https://auto.gluon.ai/) using Ray Tune (ASHA early stopping).
This model predicts whether an image **contains a bracelet**.
## Dataset
- Source: [`samder03/2025-24679-image-dataset`](https://huggingface.co/datasets/samder03/2025-24679-image-dataset)
- Task: **Binary image classification** (bracelet vs. not-bracelet)
- Splits used here: train≈70%, val≈20%, test≈10%; the test split is the 10% hold-out from the **augmented** data.
## Training / AutoML
- Library: AutoGluon MultiModal (`MultiModalPredictor`)
- Search space: `mobilenetv3_small_100`, `efficientnet_b0`, `resnet18`, `swin_tiny_patch4_window7_224`, `deit_tiny_patch16_224`
- Tuned hparams: learning rate, weight decay, batch size, max epochs; ASHA for early stopping (see the sketch after this list)
- **Compute budget**: trials = 12, time_limit = None
- Seed: 42
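For reference, a hedged sketch of how a search like this might be configured with `MultiModalPredictor`. The dataframe and column names, CSV paths, search ranges, and tuning kwargs are assumptions for illustration; the actual training script may differ.

```python
import pandas as pd
from autogluon.multimodal import MultiModalPredictor
from ray import tune

# Assumed: dataframes with an "image" path column and a "label" column,
# built from the dataset splits described above.
train_df = pd.read_csv("train.csv")
val_df = pd.read_csv("val.csv")

hyperparameters = {
    "model.names": ["timm_image"],
    "model.timm_image.checkpoint_name": tune.choice([
        "mobilenetv3_small_100", "efficientnet_b0", "resnet18",
        "swin_tiny_patch4_window7_224", "deit_tiny_patch16_224",
    ]),
    "optimization.learning_rate": tune.loguniform(1e-5, 1e-3),
    "optimization.weight_decay": tune.loguniform(1e-6, 1e-2),
    "optimization.max_epochs": tune.choice([5, 10, 15]),
    "env.batch_size": tune.choice([16, 32, 64]),
}

predictor = MultiModalPredictor(label="label", eval_metric="accuracy")
predictor.fit(
    train_data=train_df,
    tuning_data=val_df,
    hyperparameters=hyperparameters,
    hyperparameter_tune_kwargs={
        "searcher": "random",   # assumption; could also be "bayes"
        "scheduler": "ASHA",    # ASHA early stopping, as noted above
        "num_trials": 12,       # matches the stated compute budget
    },
)
```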
### Best trial (as recorded)
```
{}
```
## Results (test on `original` split)
```
{
"has bracelet mod": {
"precision": 1.0,
"recall": 1.0,
"f1-score": 1.0,
"support": 14.0
},
"no bracelet mod": {
"precision": 1.0,
"recall": 1.0,
"f1-score": 1.0,
"support": 19.0
},
"accuracy": 1.0,
"macro avg": {
"precision": 1.0,
"recall": 1.0,
"f1-score": 1.0,
"support": 33.0
},
"weighted avg": {
"precision": 1.0,
"recall": 1.0,
"f1-score": 1.0,
"support": 33.0
}
}
```
## Usage
Download the zip artifact and load locally:
```python
from autogluon.multimodal import MultiModalPredictor
import pandas as pd
predictor = MultiModalPredictor.load("predictor_native") # unzip content to ./predictor_native first
# Predict a few images
df = pd.DataFrame({"image": ["path/to/image1.png", "path/to/image2.png"]})
preds = predictor.predict(df)
proba = predictor.predict_proba(df)
print(preds, proba.head())
```
---
_Trained in a class assignment (24-679). Dataset license: MIT (see dataset card)._
|
ucfc2024/lindatatiana398
|
ucfc2024
| 2025-09-24T19:57:14 | 0 | 0 | null |
[
"license:other",
"region:us"
] | null | 2025-09-24T19:15:16 |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
|
pokipolii/mistral-ko-merged
|
pokipolii
| 2025-09-24T19:42:35 | 0 | 0 | null |
[
"safetensors",
"mistral",
"korean",
"lora",
"merged",
"text-generation",
"ko",
"en",
"base_model:mistralai/Mistral-7B-v0.1",
"base_model:adapter:mistralai/Mistral-7B-v0.1",
"license:apache-2.0",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-09-24T19:30:18 |
---
license: apache-2.0
language:
- ko
- en
base_model: mistralai/Mistral-7B-v0.1
pipeline_tag: text-generation
tags:
- mistral
- korean
- lora
- merged
---
|
ziadtarek12/whisper-arabic-gulf_msa-seed_168-peft
|
ziadtarek12
| 2025-09-24T19:41:40 | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-24T19:41:34 |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zhnuc/high_quality_sft_12B
|
zhnuc
| 2025-09-24T19:38:40 | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"mistral",
"text-generation",
"llama-factory",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-24T19:35:10 |
---
library_name: transformers
tags:
- llama-factory
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pepijn223/pi05_libero_new_50
|
pepijn223
| 2025-09-24T19:21:51 | 0 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"pi05",
"dataset:HuggingFaceVLA/libero",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-24T18:56:52 |
---
datasets: HuggingFaceVLA/libero
library_name: lerobot
license: apache-2.0
model_name: pi05
pipeline_tag: robotics
tags:
- lerobot
- robotics
- pi05
---
# Model Card for pi05
<!-- Provide a quick summary of what the model is/does. -->
**π₀.₅ (Pi05) Policy**
π₀.₅ is a Vision-Language-Action model with open-world generalization, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
**Model Overview**
π₀.₅ represents a significant evolution from π₀, developed by Physical Intelligence to address a big challenge in robotics: open-world generalization. While robots can perform impressive tasks in controlled environments, π₀.₅ is designed to generalize to entirely new environments and situations that were never seen during training.
For more details, see the [Physical Intelligence π₀.₅ blog post](https://www.physicalintelligence.company/blog/pi05).
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is a short guide to training and running inference/evaluation:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=pi05 \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
--policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
TTKhosa/bio-mistral-tb-qna-E8
|
TTKhosa
| 2025-09-24T19:08:18 | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-24T19:08:15 |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
lieding1994/promptgen-base-onnx
|
lieding1994
| 2025-09-24T19:04:46 | 0 | 0 |
transformers.js
|
[
"transformers.js",
"onnx",
"florence2",
"image-text-to-text",
"vision",
"text-generation",
"text2text-generation",
"image-to-text",
"base_model:microsoft/Florence-2-base",
"base_model:quantized:microsoft/Florence-2-base",
"license:mit",
"region:us"
] |
image-text-to-text
| 2025-09-24T18:29:02 |
---
base_model: microsoft/Florence-2-base
library_name: transformers.js
license: mit
pipeline_tag: image-text-to-text
tags:
- vision
- text-generation
- text2text-generation
- image-to-text
---
https://huggingface.co/microsoft/Florence-2-base with ONNX weights to be compatible with Transformers.js.
## Usage (Transformers.js)
If you haven't already, you can install the [Transformers.js](https://huggingface.co/docs/transformers.js) JavaScript library from [NPM](https://www.npmjs.com/package/@huggingface/transformers) using:
```bash
npm i @huggingface/transformers
```
**Example:** Perform image captioning with `onnx-community/Florence-2-base`.
```js
import {
Florence2ForConditionalGeneration,
AutoProcessor,
load_image,
} from '@huggingface/transformers';
// Load model, processor, and tokenizer
const model_id = 'onnx-community/Florence-2-base';
const model = await Florence2ForConditionalGeneration.from_pretrained(model_id, { dtype: 'fp32' });
const processor = await AutoProcessor.from_pretrained(model_id);
// Load image and prepare vision inputs
const url = 'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/tasks/car.jpg';
const image = await load_image(url);
// Specify task and prepare text inputs
const task = '<MORE_DETAILED_CAPTION>';
const prompts = processor.construct_prompts(task);
// Pre-process the image and text inputs
const inputs = await processor(image, prompts);
// Generate text
const generated_ids = await model.generate({
...inputs,
max_new_tokens: 100,
});
// Decode generated text
const generated_text = processor.batch_decode(generated_ids, { skip_special_tokens: false })[0];
// Post-process the generated text
const result = processor.post_process_generation(generated_text, task, image.size);
console.log(result);
// { '<MORE_DETAILED_CAPTION>': 'The image shows a vintage Volkswagen Beetle car parked on a cobblestone street in front of a yellow building with two wooden doors. The car is a light green color with silver rims and appears to be in good condition. The building has a sloping roof and is painted in a combination of yellow and beige colors. The sky is blue and there are trees in the background. The overall mood of the image is peaceful and serene.' }
```
We also released an online demo, which you can try yourself: https://huggingface.co/spaces/Xenova/florence2-webgpu
<video controls autoplay src="https://cdn-uploads.huggingface.co/production/uploads/61b253b7ac5ecaae3d1efe0c/BJj3jQXNqS_7Nt2MSb2ss.mp4"></video>
---
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`).
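For reference, a conversion sketch with 🤗 Optimum's Python API is shown below (untested; Florence-2 uses custom modeling code, so it may additionally require remote-code support or a custom export configuration):
```python
from optimum.exporters.onnx import main_export

# Export the PyTorch checkpoint to ONNX; extra flags may be needed
# for custom architectures such as Florence-2.
main_export(
    "microsoft/Florence-2-base",
    output="onnx/",
    trust_remote_code=True,  # Florence-2 ships custom modeling code
)
```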
|
BootesVoid/cmfyat5qn0hjkx0n0pyy2y7jk_cmfyay92m0hjsx0n01uwjkfch
|
BootesVoid
| 2025-09-24T18:59:29 | 0 | 0 |
diffusers
|
[
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] |
text-to-image
| 2025-09-24T18:59:28 |
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: MADI
---
# Cmfyat5Qn0Hjkx0N0Pyy2Y7Jk_Cmfyay92M0Hjsx0N01Uwjkfch
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `MADI` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "MADI",
"lora_weights": "https://huggingface.co/BootesVoid/cmfyat5qn0hjkx0n0pyy2y7jk_cmfyay92m0hjsx0n01uwjkfch/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('BootesVoid/cmfyat5qn0hjkx0n0pyy2y7jk_cmfyay92m0hjsx0n01uwjkfch', weight_name='lora.safetensors')
image = pipeline('MADI').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
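As a sketch of the weighting mentioned above (assuming the diffusers PEFT integration; the adapter name `madi` is our own choice, not part of this repo), the LoRA's influence can be scaled like this:
```python
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    'black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16
).to('cuda')
pipeline.load_lora_weights(
    'BootesVoid/cmfyat5qn0hjkx0n0pyy2y7jk_cmfyay92m0hjsx0n01uwjkfch',
    weight_name='lora.safetensors',
    adapter_name='madi',  # adapter name is our choice
)
pipeline.set_adapters(['madi'], adapter_weights=[0.8])  # dial the LoRA strength to 0.8
image = pipeline('MADI').images[0]
```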
## Training details
- Steps: 2500
- Learning rate: 9e-05
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/BootesVoid/cmfyat5qn0hjkx0n0pyy2y7jk_cmfyay92m0hjsx0n01uwjkfch/discussions) to add images that show off what you’ve made with this LoRA.
|
sasawq21/test-20250924-185019
|
sasawq21
| 2025-09-24T18:56:17 | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/medgemma-4b-it",
"base_model:finetune:google/medgemma-4b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-09-24T18:50:25 |
---
base_model: google/medgemma-4b-it
library_name: transformers
model_name: test-20250924-185019
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for test-20250924-185019
This model is a fine-tuned version of [google/medgemma-4b-it](https://huggingface.co/google/medgemma-4b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="sasawq21/test-20250924-185019", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/rustin_r-the-university-of-texas-at-austin/chimera_medgemma-intern-0924/runs/xii56oj7)
This model was trained with SFT.
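For context, a minimal TRL SFT sketch under assumptions is shown below; the actual training data and hyperparameters are not documented in this card, and the toy dataset is purely illustrative.
```python
from datasets import Dataset
from trl import SFTConfig, SFTTrainer

# Toy conversational dataset in the "messages" format TRL's SFTTrainer accepts
train_dataset = Dataset.from_dict({
    "messages": [[
        {"role": "user", "content": "What is hypertension?"},
        {"role": "assistant", "content": "Hypertension is persistently elevated blood pressure."},
    ]]
})

trainer = SFTTrainer(
    model="google/medgemma-4b-it",           # base model named in this card
    args=SFTConfig(output_dir="sft-demo"),
    train_dataset=train_dataset,
)
trainer.train()
```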
### Framework versions
- TRL: 0.19.0
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Kijai/WanVideo_comfy_fp8_scaled
|
Kijai
| 2025-09-24T18:54:50 | 333,720 | 253 |
diffusion-single-file
|
[
"diffusion-single-file",
"comfyui",
"base_model:Wan-AI/Wan2.1-VACE-1.3B",
"base_model:finetune:Wan-AI/Wan2.1-VACE-1.3B",
"license:apache-2.0",
"region:us"
] | null | 2025-07-22T10:39:42 |
---
tags:
- diffusion-single-file
- comfyui
license: apache-2.0
base_model:
- Wan-AI/Wan2.1-VACE-14B
- Wan-AI/Wan2.1-VACE-1.3B
---
Improved fp8-scaled models (as measured against fp16), based on quantization code from https://github.com/Tencent-Hunyuan/HunyuanVideo/blob/main/hyvideo/modules/fp8_optimization.py
Can be used with: https://github.com/kijai/ComfyUI-WanVideoWrapper (latest version) and ComfyUI native WanVideo nodes.
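As background, "scaled" fp8 stores each low-precision tensor together with a scale factor. A minimal illustrative per-tensor sketch follows (not the exact code referenced above):
```python
import torch

E4M3_MAX = 448.0  # largest finite value representable in float8_e4m3fn

def quantize_fp8_scaled(w: torch.Tensor):
    # Choose a per-tensor scale so the largest weight maps near the fp8 max,
    # then store the low-precision tensor together with its scale.
    scale = w.abs().max().clamp(min=1e-12) / E4M3_MAX
    w_fp8 = (w / scale).to(torch.float8_e4m3fn)
    return w_fp8, scale

def dequantize_fp8_scaled(w_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Multiply the scale back in at (or before) matmul time
    return w_fp8.to(torch.float16) * scale
```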
14B-T2V comparison test without LoRAs, 25 steps, 832x480x81
---
<video controls autoplay width=50% src=https://cdn-uploads.huggingface.co/production/uploads/63297908f0b2fc94904a65b8/DwlAGbj20it1unZW54NDC.mp4></video>
2.2 A14B-T2V test
---
<video controls autoplay width=50% src=https://cdn-uploads.huggingface.co/production/uploads/63297908f0b2fc94904a65b8/6A_AZ7GN_uxeRH0vwsWkH.mp4></video>
<video controls autoplay width=50% src=https://cdn-uploads.huggingface.co/production/uploads/63297908f0b2fc94904a65b8/GpuqQ4YwoR3kjxkhuvP8P.mp4></video>
The e5m2 model marked as v2 is the one uploaded here; all of these models are scaled, even where the filenames do not say so.
|
divyanshsharma5c21/sakhi-ai-gemma-2b
|
divyanshsharma5c21
| 2025-09-24T18:44:05 | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-24T18:39:14 |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
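The card does not document usage; a minimal sketch with the 🤗 Transformers pipeline (assuming a standard causal text-generation checkpoint, per the repository tags) would be:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="divyanshsharma5c21/sakhi-ai-gemma-2b")
print(generator("Hello, how can you help me today?", max_new_tokens=64)[0]["generated_text"])
```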
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
butterlabs/blitz-air-one
|
butterlabs
| 2025-09-24T18:38:27 | 0 | 0 | null |
[
"gguf",
"en",
"dataset:butterlabs/blitz-air-1-dataset",
"base_model:Qwen/Qwen3-0.6B",
"base_model:quantized:Qwen/Qwen3-0.6B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-24T18:35:21 |
---
license: apache-2.0
datasets:
- butterlabs/blitz-air-1-dataset
language:
- en
base_model:
- Qwen/Qwen3-0.6B
---
|
onnxmodelzoo/xcit_nano_12_p8_224_Opset18
|
onnxmodelzoo
| 2025-09-24T18:35:44 | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-24T18:35:40 |
---
language: en
license: apache-2.0
model_name: xcit_nano_12_p8_224_Opset18.onnx
tags:
- Computer_Vision
- skip
---
|
onnxmodelzoo/xcit_nano_12_p8_224_dist_Opset17
|
onnxmodelzoo
| 2025-09-24T18:35:26 | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-24T18:35:23 |
---
language: en
license: apache-2.0
model_name: xcit_nano_12_p8_224_dist_Opset17.onnx
tags:
- Computer_Vision
- skip
---
|
Anwaarma/edos_taskb_llama3b_lora
|
Anwaarma
| 2025-09-24T18:29:20 | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:adapter:meta-llama/Llama-3.2-3B",
"lora",
"transformers",
"base_model:meta-llama/Llama-3.2-3B",
"license:llama3.2",
"region:us"
] | null | 2025-09-24T17:51:06 |
---
library_name: peft
license: llama3.2
base_model: meta-llama/Llama-3.2-3B
tags:
- base_model:adapter:meta-llama/Llama-3.2-3B
- lora
- transformers
metrics:
- accuracy
model-index:
- name: edos_taskb_llama3b_lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# edos_taskb_llama3b_lora
This model is a fine-tuned version of [meta-llama/Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9839
- Accuracy: 0.5660
- F1 Macro: 0.5362
- F1 Micro: 0.5660
## Model description
More information needed
## Intended uses & limitations
More information needed
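The card omits a usage snippet. A minimal loading sketch under assumptions is shown below; the number of labels and the classification-head setup are guesses based on EDOS task B and are not confirmed by this card.
```python
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Assumption: a 4-way sequence-classification head, as in EDOS task B
base = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-3.2-3B", num_labels=4, torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(base, "Anwaarma/edos_taskb_llama3b_lora")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B")
```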
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00015
- train_batch_size: 32
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: PAGED_ADAMW_8BIT (betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.06
- num_epochs: 15
- label_smoothing_factor: 0.02
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 Macro | F1 Micro |
|:-------------:|:-------:|:----:|:---------------:|:--------:|:--------:|:--------:|
| 2.2913 | 1.8598 | 100 | 1.0997 | 0.5144 | 0.4213 | 0.5144 |
| 2.005 | 3.7103 | 200 | 1.0233 | 0.5309 | 0.4693 | 0.5309 |
| 1.8166 | 5.5607 | 300 | 0.9849 | 0.5700 | 0.4826 | 0.5700 |
| 1.7241 | 7.4112 | 400 | 0.9637 | 0.5679 | 0.5349 | 0.5679 |
| 1.6516 | 9.2617 | 500 | 0.9544 | 0.5802 | 0.4941 | 0.5802 |
| 1.6584 | 11.1121 | 600 | 0.9481 | 0.5905 | 0.5182 | 0.5905 |
| 1.6708 | 12.9720 | 700 | 0.9471 | 0.5844 | 0.5152 | 0.5844 |
### Framework versions
- PEFT 0.17.1
- Transformers 4.56.2
- Pytorch 2.8.0+cu126
- Datasets 4.1.1
- Tokenizers 0.22.0
|
onnxmodelzoo/xcit_large_24_p16_384_dist_Opset17
|
onnxmodelzoo
| 2025-09-24T18:25:48 | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-24T18:25:12 |
---
language: en
license: apache-2.0
model_name: xcit_large_24_p16_384_dist_Opset17.onnx
tags:
- Computer_Vision
- skip
---
|
onnxmodelzoo/xcit_large_24_p16_224_dist_Opset16
|
onnxmodelzoo
| 2025-09-24T18:23:44 | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-24T18:23:03 |
---
language: en
license: apache-2.0
model_name: xcit_large_24_p16_224_dist_Opset16.onnx
tags:
- Computer_Vision
- skip
---
|
onnxmodelzoo/vit_small_patch16_224_dino_Opset16
|
onnxmodelzoo
| 2025-09-24T18:20:47 | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-24T18:20:39 |
---
language: en
license: apache-2.0
model_name: vit_small_patch16_224_dino_Opset16.onnx
tags:
- Computer_Vision
- skip
---
|
lmq1909/Qwen2.5-1.5B-sft-2e
|
lmq1909
| 2025-09-24T18:17:15 | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"conversational",
"en",
"base_model:lmq1909/Qwen2.5-1.5B-continued-prertraining-4e",
"base_model:quantized:lmq1909/Qwen2.5-1.5B-continued-prertraining-4e",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] |
text-generation
| 2025-09-24T18:16:55 |
---
base_model: lmq1909/Qwen2.5-1.5B-continued-prertraining-4e
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
license: apache-2.0
language:
- en
---
# Uploaded finetuned model
- **Developed by:** lmq1909
- **License:** apache-2.0
- **Finetuned from model:** lmq1909/Qwen2.5-1.5B-continued-prertraining-4e
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
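A minimal Unsloth loading sketch is shown below (untested; arguments such as `max_seq_length` are illustrative):
```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    "lmq1909/Qwen2.5-1.5B-sft-2e",
    max_seq_length=2048,   # illustrative
    load_in_4bit=True,     # the repo is tagged 4-bit / bitsandbytes
)
FastLanguageModel.for_inference(model)  # enable fast inference mode
```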
|
meituan-longcat/LongCat-Flash-Thinking
|
meituan-longcat
| 2025-09-24T06:26:25 | 115 | 107 |
LongCat-Flash-Chat
|
[
"LongCat-Flash-Chat",
"safetensors",
"text-generation",
"transformers",
"conversational",
"custom_code",
"arxiv:2509.18883",
"license:mit",
"region:us"
] |
text-generation
| 2025-09-21T07:46:09 |
---
license: mit
library_name: LongCat-Flash-Chat
pipeline_tag: text-generation
tags:
- transformers
---
# LongCat-Flash-Thinking
<div align="center">
<img src="https://raw.githubusercontent.com/meituan-longcat/LongCat-Flash-Chat/main/figures/longcat_logo.svg" width="45%" alt="LongCat-Flash" />
</div>
<hr>
<div align="center" style="line-height: 1;">
<a href="https://longcat.ai/" target="_blank" style="margin: 2px;">
<img alt="Chat" src="https://img.shields.io/badge/🤖%20Chat-LongCat--Flash--Thinking-ADFF2F?color=29E154&logoColor=white" fill-opacity="1" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/meituan-longcat/LongCat-Flash-Thinking">
<img alt="github" src="https://img.shields.io/badge/🤖%20Github-LongCat--Flash--Thinking-ff6b6b?color=1783ff&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://github.com/meituan-longcat/LongCat-Flash-Thinking/blob/main/figures/wechat_official_accounts.png" target="_blank" style="margin: 2px;">
<img alt="Wechat" src="https://img.shields.io/badge/WeChat-LongCat-brightgreen?logo=wechat&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://x.com/Meituan_LongCat" target="_blank" style="margin: 2px;">
<img alt="Twitter Follow" src="https://img.shields.io/badge/Twitter-LongCat-white?logo=x&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<div align="center" style="line-height: 1;">
<a href="https://huggingface.co/meituan-longcat/LongCat-Flash-Thinking/blob/main/LICENSE" style="margin: 2px;">
<img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?&color=f5de53" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
<p align="center">
<a href="https://arxiv.org/abs/2509.18883"><b>Tech Report</b> 📄</a>
</p>
## Model Introduction
We introduce and release **LongCat-Flash-Thinking**, a powerful and efficient large reasoning model (LRM) with 560 billion total parameters, built on an innovative Mixture-of-Experts (MoE) architecture. The model incorporates a dynamic computation mechanism that activates 18.6B–31.3B parameters (averaging ~27B) based on contextual demands, optimizing both computational efficiency and performance. LongCat-Flash-Thinking is trained with our DORA system, an efficient distributed RL framework that supports asynchronous training and flexible accelerator usage to ensure stability and efficiency. Our comprehensive data curation and domain-parallel training recipe ensure stable and efficient training. Beyond general reasoning, the model is also equipped with formal-reasoning and agentic-reasoning techniques, advancing LRMs' ability on diverse complex tasks such as mathematics, logic, programming, automatic theorem proving, and tool use.
Specifically, the development of LongCat-Flash-Thinking follows a two-phase pipeline:
- **Long CoT Cold-Start Training**: This phase aims to cultivate the model's foundational reasoning abilities.
This begins with a curriculum learning strategy during mid-training to bolster intrinsic capabilities, followed by an SFT stage on reasoning-intensive and agentic data that prepares the model for advanced learning.
- **Large-Scale RL**: The second phase scales up this potential through an efficient RL framework, built upon our Dynamic Orchestration for Asynchronous Rollout (DORA) system for industrial-scale asynchronous training.
To address the stability challenges in asynchronous RL training, we adapt and extend the GRPO algorithm for a robust exploration-exploitation balance. A key innovation in this phase is our domain-parallel training scheme, which simultaneously optimizes the model across distinct domains and subsequently merges the resulting domain-expert models into a fused model. Finally, we perform a general RL stage to further refine the fused model and enhance its robustness, safety, and human alignment ability.
### Key Features
#### 🌟 Domain-Parallel RL Training Methodology
To overcome the instability of traditional mixed-domain RL training, LongCat-Flash-Thinking incorporates a domain-parallel training scheme that decouples optimization across STEM, coding, and agentic tasks.
This approach not only stabilizes training but also makes it possible to fuse the resulting domain-expert models into a nearly Pareto-optimal final model that excels across all specialties.
#### 🌟 Pioneering RL Infrastructure
LongCat-Flash-Thinking is built upon our self-designed DORA system.
The main motivation is to optimize long-tail generation by leveraging multiple older versions of the Actor model through streaming rollout while preserving sampling consistency.
The DORA system consists of two core components: elastic colocation and a multi-version asynchronous pipeline. These components enhance training efficiency, ensure per-sample policy consistency, and enable efficient KV-cache reuse, facilitating stable and scalable training on tens of thousands of accelerators.
#### 🌟 Advancing Formal Reasoning and Agentic Reasoning
In addition to general reasoning (e.g., mathematics, logic, coding, and instruction-following), LongCat-Flash-Thinking also emphasizes two other critical capabilities.
- **Formal Reasoning**: LongCat-Flash-Thinking can solve complex formal reasoning tasks such as automatic theorem proving. To realize this potential and empower researchers, we significantly enhance the model's formal reasoning capabilities through a novel expert iteration framework for careful data synthesis, involving statement formalization, iterative proof synthesis, and syntax/consistency filtering.
- **Agentic Reasoning**: LongCat-Flash-Thinking can adaptively utilize provided tools to solve complex reasoning tasks. To this end, we introduce a dual-path reasoning approach that identifies and retains high-quality queries that genuinely require tool assistance, fostering robust agentic abilities. After high-value query selection, we synthesize corresponding high-quality solution trajectories in a versatile environment with diverse tool APIs, including MCP servers and simulated tools for both single- and multi-turn interactions.
For more details, please refer to the comprehensive [**LongCat-Flash-Thinking Technical Report**](https://arxiv.org/abs/2509.18883).
## Evaluation Results
| **Benchmark** | DeepSeek-V3.1-Thinking | Qwen3-235B-A22B-Thinking-2507 | GLM-4.5 | OpenAI-o3 | Gemini2.5-Pro | GPT-5-Thinking | LongCat-Flash-Thinking |
|---------------|-------------------------|------------------------------|--------|-----------|---------------|----------------|-------------------------|
| Architecture | MoE | MoE | MoE | - | - | - | MoE |
| \# Total Params | 671B | 235B | 355B | - | - | - | 560B |
| \# Activated Params | 37B | 22B | 32B | - | - | - | 27B |
| **General QA** | | | | | | | |
| MMLU-Pro<sub>(acc)</sub> | 84.4 | 84.4 | 81.5 | 85.3 | 86.7 | 84.5 | 82.6 |
| MMLU-Redux<sub>(acc)</sub> | 90.5 | 91.4 | 89.9 | 93.1 | 90.1 | 92.6 | 89.3 |
| **Alignment** | | | | | | | |
| IFEval<sub>(strict prompt)</sub> | 86.3 | 89.3 | 85.4 | 90.2 | 92.4 | 92.8 | 86.9 |
| Arena-Hard<sub>(hard prompt gemini)</sub> | 57.1 | 74.5 | 67.7 | 87.1 | 87.1 | 87.7 | 69.9 |
| **Mathematical Reasoning** | | | | | | | |
| MATH500<sub>(Mean@1)</sub> | 98.8 | 99.6 | 95.4 | 98.4 | 98.0 | 99.2 | 99.2 |
| HMMT25<sub>(Mean@32)</sub> | 80.4 | 83.8 | 76.3 | 71.9 | 79.3 | 84.8 | 83.7 |
| AIME24<sub>(Mean@32)</sub> | 93.9 | 93.9 | 89.3 | 91.6* | 90.7 | 92.0 | 93.3 |
| AIME25<sub>(Mean@32)</sub> | 87.9 | 92.5 | 85.5 | 88.9* | 89.2 | 94.6* | 90.6 |
| BeyondAIME<sub>(Mean@10)</sub> | 71.8 | 71.5 | 66.0 | 63.2 | 63.0 | 70.0 | 69.5 |
| **General Reasoning** | | | | | | | |
| GPQA-Diamond<sub>(Mean@16)</sub> | 84.2 | 80.4 | 78.3 | 81.9 | 84.0 | 84.4 | 81.5 |
| ZebraLogic<sub>(Mean@1)</sub> | 96.1 | 97.5 | 90.9 | 94.3 | 92.4 | 92.7 | 95.5 |
| Sudoku-Bench<sub>(Mean@1)</sub> | 1.0 | 2.0 | 1.0 | 70.0 | 0.0 | 63.0 | 56.0 |
| ARC-AGI<sub>(Mean@1)</sub> | 37.5 | 45.3 | 21.41 | 47.3 | 46.8 | 59.0 | 50.3 |
| **Coding** | | | | | | | |
| LiveCodeBench<sub>(Mean@4)</sub> | 73.5 | 75.4 | 61.1 | 76.2 | 74.2 | 80.6 | 79.4 |
| OJBench<sub>(Mean@1)</sub> | 33.6 | 32.1 | 19.0 | 38.4 | 41.6 | 34.1 | 40.7 |
| **Agentic Tool Using** | | | | | | | |
| SWE-Bench<sub>(Pass@1)</sub> | 66.0* | 34.4 | 64.2* | 69.1* | 59.6* | 74.9* | 59.4 |
| BFCL V3<sub>(full)</sub> | 55.4 | 75.7 | 79.1 | 72.4* | 63.2 | 60.1 | 74.4 |
| τ²-Bench-Retail<sub>(Mean@4)</sub> | 65.4 | 68.2 | 69.3 | 72.8 | 70.9 | 81.1* | 71.5 |
| τ²-Bench-Airline<sub>(Mean@4)</sub> | 44.0 | 58.0 | 66.0 | 62.5 | 58.0 | 62.6* | 67.5 |
| τ²-Bench-Telecom<sub>(Mean@4)</sub> | 23.7 | 47.3 | 56.1 | 67.5 | 38.3 | 96.7* | 83.1 |
| VitaBench | 13.5 | 21.5 | 26.8 | 35.3 | 24.3 | 29.3 | 29.5 |
| **Formal Theorem Proving** | | | | | | | |
| MiniF2F-Test<sub>(Pass@1)</sub> | 49.6 | 11.9 | 10.9 | 15.2 | 13.9 | 21.4 | 67.6 |
| MiniF2F-Test<sub>(Pass@8)</sub> | 74.4 | 20.9 | 22.1 | 29.6 | 29.4 | 39.7 | 79.4 |
| MiniF2F-Test<sub>(Pass@32)</sub> | 79.5 | 26.6 | 27.0 | 37.7 | 41.8 | 51.2 | 81.6 |
| **Safety** | | | | | | | |
| Harmful | 79.2 | 84.3 | 70.4 | 64.8 | 44.3 | 56.8 | 93.7 |
| Criminal | 89.7 | 92.7 | 88.8 | 85.7 | 77.4 | 87.3 | 97.1 |
| Misinformation | 81.1 | 80.9 | 67.1 | 42.7 | 31.0 | 41.9 | 93.0 |
| Privacy | 96.2 | 100.0 | 97.6 | 100.0 | 95.0 | 98.8 | 98.8 |
Note:
- Values marked with * are sourced from other public reports.
- The inference parameters of LongCat-Flash-Thinking are set to `temperature=1.0`, `top_k=-1`, and `top_p=0.95`.
## Quick Start
### Chat Template
The details of our chat template are provided in the `tokenizer_config.json` file. Below are some examples.
#### First-Turn
With the following prefix, LongCat-Flash can generate responses corresponding to user queries:
```
[Round 0] USER:{query} /think_on ASSISTANT:
```
When a system prompt is specified, the prefix will take the following format:
```
SYSTEM:{system_prompt} [Round 0] USER:{query} /think_on ASSISTANT:
```
#### Multi-Turn
In multi-turn scenarios, the prefix is constructed by concatenating the context with the latest user query:
```
SYSTEM:{system_prompt} [Round 0] USER:{query} /think_on ASSISTANT:{response}... [Round N-1] USER:{query} /think_on ASSISTANT:{response} [Round N] USER:{query} /think_on ASSISTANT:
```
Here, N denotes the N-th round of user queries, with indexing starting from zero.
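A small helper that assembles this prefix could look like the sketch below (illustrative only; production use should rely on the chat template in `tokenizer_config.json`):
```python
def build_prompt(turns, system_prompt=None):
    """turns: list of (query, response) pairs; the last response may be None."""
    prompt = f"SYSTEM:{system_prompt} " if system_prompt else ""
    for i, (query, response) in enumerate(turns):
        prompt += f"[Round {i}] USER:{query} /think_on ASSISTANT:"
        if response is not None:
            prompt += f"{response} "
    return prompt

# Example: one completed round plus a new query awaiting a response
prefix = build_prompt([("Hi", "Hello!"), ("What is 2+2?", None)])
```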
#### ToolCall
LongCat-Flash supports tool calling in the following format:
```
{tool_description}
## Messages
SYSTEM:{system_prompt} [Round 0] USER:{query} /think_on ASSISTANT:
```
The tool_description is:
```markdown
## Tools
You have access to the following tools:
### Tool namespace: function
#### Tool name: {func.name}
Description: {func.description}
InputSchema:
{json.dumps(func.parameters, indent=2)}
**Note**: For each function call, return a json object with function name and arguments within <longcat_tool_call></longcat_tool_call> XML tags as follows:
<longcat_tool_call>
{"name": <function-name>, "arguments": <args-dict>}
</longcat_tool_call>
When multiple functions need to be called simultaneously, each function call should be wrapped in its own <longcat_tool_call> tag and placed consecutively. For example:
<longcat_tool_call>
{"name": <function-name>, "arguments": <args-dict>}
</longcat_tool_call><longcat_tool_call>
{"name": <function-name>, "arguments": <args-dict>}
</longcat_tool_call>
```
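A sketch of rendering this tool description from a function schema (the field names and the example tool are assumptions for illustration):
```python
import json

def render_tool_description(funcs):
    lines = ["## Tools", "You have access to the following tools:", "", "### Tool namespace: function"]
    for func in funcs:
        lines += [
            f"#### Tool name: {func['name']}",
            f"Description: {func['description']}",
            "InputSchema:",
            json.dumps(func["parameters"], indent=2),
        ]
    return "\n".join(lines)

tools = [{"name": "get_weather", "description": "Look up current weather.",
          "parameters": {"type": "object", "properties": {"city": {"type": "string"}}}}]
print(render_tool_description(tools))
```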
#### Mathematical Reasoning
We recommend adding the following instruction when solving mathematical or other STEM-related reasoning tasks, so that the final answer can be located for evaluation.
```text
[Round 0] USER:{problem}
Please reason step by step, and put your final answer within \\boxed{}. /think_on ASSISTANT:
```
#### Formal Reasoning
LongCat-Flash-Thinking also supports formal reasoning, like automatic theorem proving (ATP). The specific template is:
```text
[Round 0] USER:Think about and solve the following problem step by step in Lean 4.
# Problem:{problem}
# Formal statement:{formal_statement}
/think_on ASSISTANT:
```
## Deployment
We have implemented basic adaptations in both SGLang and vLLM to support the deployment of LongCat-Flash-Thinking. Please refer to the [Deployment Guide](https://github.com/meituan-longcat/LongCat-Flash-Thinking/blob/main/docs/deployment_guide.md) for detailed deployment instructions.
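Once a LongCat-adapted build is installed per the guide, serving could look like the sketch below (untested; it reuses the recommended sampling parameters noted above):
```python
from vllm import LLM, SamplingParams

llm = LLM(model="meituan-longcat/LongCat-Flash-Thinking", trust_remote_code=True)
params = SamplingParams(temperature=1.0, top_k=-1, top_p=0.95, max_tokens=2048)

prompt = "[Round 0] USER:Prove that the sum of two even numbers is even. /think_on ASSISTANT:"
print(llm.generate([prompt], params)[0].outputs[0].text)
```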
## Chat Website
You can chat with LongCat-Flash-Thinking on our official website: [https://longcat.ai](https://longcat.ai).
Please turn on the button "Think" ("深度思考" in Chinese) before submitting your request.
## License Agreement
The **model weights** are released under the **MIT License**.
Any contributions to this repository are licensed under the MIT License, unless otherwise stated. This license does not grant any rights to use Meituan trademarks or patents.
See the [LICENSE](LICENSE) file for the full license text.
## Usage Considerations
This model has not been specifically designed or comprehensively evaluated for every possible downstream application.
Developers should take into account the known limitations of large language models, including performance variations across different languages, and carefully assess accuracy, safety, and fairness before deploying the model in sensitive or high-risk scenarios.
It is the responsibility of developers and downstream users to understand and comply with all applicable laws and regulations relevant to their use case, including but not limited to data protection, privacy, and content safety requirements.
Nothing in this Model Card should be interpreted as altering or restricting the terms of the MIT License under which the model is released.
## Citation
We kindly encourage citation of our work if you find it useful.
```
@misc{meituan2025longcatflashthinkingtechnicalreport,
title={LongCat-Flash-Thinking Technical Report},
author={Meituan},
year={2025},
eprint={2509.18883},
archivePrefix={arXiv},
primaryClass={cs.AI},
url={https://arxiv.org/abs/2509.18883},
}
```
## Contact
Please contact us at <a href="mailto:longcat-team@meituan.com">longcat-team@meituan.com</a> or join our WeChat Group if you have any questions.
|
Pheyji/AceInstruct-1.5B-Gensyn-Swarm-scented_silent_ladybug
|
Pheyji
| 2025-09-24T06:25:13 | 4 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am scented_silent_ladybug",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T01:37:53 |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am scented_silent_ladybug
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
iamzac/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-graceful_reclusive_skunk
|
iamzac
| 2025-09-24T06:22:59 | 98 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am graceful_reclusive_skunk",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-08-20T03:17:47 |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am graceful_reclusive_skunk
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Alex6513/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grazing_diving_beaver
|
Alex6513
| 2025-09-24T06:22:22 | 92 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am grazing diving beaver",
"trl",
"genrl-swarm",
"I am grazing_diving_beaver",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-04-24T19:15:55 |
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grazing_diving_beaver
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am grazing diving beaver
- trl
- genrl-swarm
- I am grazing_diving_beaver
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grazing_diving_beaver
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Alex6513/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-grazing_diving_beaver", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
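For orientation, a minimal TRL GRPO sketch under assumptions follows (toy prompts and a stand-in length-based reward; not the actual swarm training setup):
```python
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Toy prompt-only dataset; GRPO needs prompts plus a reward function
train_dataset = Dataset.from_dict({"prompt": ["Solve: 2 + 2 =", "Name a prime number."]})

def reward_short(completions, **kwargs):
    # Stand-in reward: prefer shorter completions
    return [-float(len(c)) for c in completions]

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_short,
    args=GRPOConfig(output_dir="grpo-demo", num_generations=2, per_device_train_batch_size=2),
    train_dataset=train_dataset,
)
trainer.train()
```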
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.3
- Pytorch: 2.5.1
- Datasets: 3.5.1
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
shfpeleg/ppo-LunarLander-v3
|
shfpeleg
| 2025-09-24T06:21:50 | 0 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"LunarLander-v3",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-23T15:54:33 |
---
library_name: stable-baselines3
tags:
- LunarLander-v3
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v3
type: LunarLander-v3
metrics:
- type: mean_reward
value: 260.33 +/- 20.35
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v3**
This is a trained model of a **PPO** agent playing **LunarLander-v3**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
Below is a minimal sketch for downloading and loading the trained agent (the checkpoint filename is an assumption; check the repository's file list):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# Download the checkpoint from the Hub (filename assumed) and load the agent
checkpoint = load_from_hub("shfpeleg/ppo-LunarLander-v3", "ppo-LunarLander-v3.zip")
model = PPO.load(checkpoint)
```
|
jerseyjerry/task-15-Qwen-Qwen2.5-3B-Instruct
|
jerseyjerry
| 2025-09-24T06:20:55 | 287 | 0 |
peft
|
[
"peft",
"safetensors",
"base_model:Qwen/Qwen2.5-3B-Instruct",
"base_model:adapter:Qwen/Qwen2.5-3B-Instruct",
"license:other",
"region:us"
] | null | 2025-09-12T12:15:40 |
---
library_name: peft
license: other
base_model: Qwen/Qwen2.5-3B-Instruct
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# lora
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
### Framework versions
- PEFT 0.15.2
- Transformers 4.48.3
- Pytorch 2.6.0+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-Adam-FisherMaskToken-1e-5-HessianMaskToken-0.01-v2_5923
|
luckeciano
| 2025-09-24T06:06:32 | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"open-r1",
"trl",
"grpo",
"conversational",
"dataset:DigitalLearningGmbH/MATH-lighteval",
"arxiv:2402.03300",
"base_model:Qwen/Qwen2.5-Math-7B",
"base_model:finetune:Qwen/Qwen2.5-Math-7B",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-24T03:26:46 |
---
base_model: Qwen/Qwen2.5-Math-7B
datasets: DigitalLearningGmbH/MATH-lighteval
library_name: transformers
model_name: Qwen-2.5-7B-GRPO-NoBaseline-Adam-FisherMaskToken-1e-5-HessianMaskToken-0.01-v2_2840
tags:
- generated_from_trainer
- open-r1
- trl
- grpo
licence: license
---
# Model Card for Qwen-2.5-7B-GRPO-NoBaseline-Adam-FisherMaskToken-1e-5-HessianMaskToken-0.01-v2_2840
This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B) on the [DigitalLearningGmbH/MATH-lighteval](https://huggingface.co/datasets/DigitalLearningGmbH/MATH-lighteval) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
# Build a chat-style text-generation pipeline on GPU and print a single reply
generator = pipeline("text-generation", model="luckeciano/Qwen-2.5-7B-GRPO-NoBaseline-Adam-FisherMaskToken-1e-5-HessianMaskToken-0.01-v2_2840", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/max-ent-llms/PolicyGradientStability/runs/yrfml1vp)
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.16.0.dev0
- Transformers: 4.49.0
- Pytorch: 2.5.1
- Datasets: 3.4.1
- Tokenizers: 0.21.2
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
JasonHsu0704/llama3.2_3B_news_merged
|
JasonHsu0704
| 2025-09-24T06:06:30 | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-09-24T05:59:28 |
---
license: apache-2.0
---
|
# Dataset Card for Hugging Face Hub Model Cards
This dataset consists of model cards for models hosted on the Hugging Face Hub. The model cards are created by the community and provide information about the model, its performance, its intended uses, and more. This dataset is updated on a daily basis and includes publicly available models on the Hugging Face Hub.
This dataset is made available to help support users wanting to work with a large number of Model Cards from the Hub. We hope that this dataset will help support research in the area of Model Cards and their use, but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new discussion.
## Dataset Details
## Uses
There are a number of potential uses for this dataset, including:
- text mining to find common themes in model cards
- analysis of the model card format/content
- topic modelling of model cards
- analysis of the model card metadata
- training language models on model cards
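Any of these uses might start by loading the dataset with 🤗 Datasets; in the sketch below, the repository id is a placeholder for this dataset's id, and the `card` column name is an assumption.
```python
from datasets import load_dataset

# "user/dataset-id" is a placeholder: substitute this dataset's repository id
cards = load_dataset("user/dataset-id", split="train")

# Peek at the markdown body of the first card ("card" column name is an assumption)
print(cards[0]["card"][:500])
```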
### Out-of-Scope Use
[More Information Needed]
## Dataset Structure
This dataset has a single split.
## Dataset Creation
### Curation Rationale
The dataset was created to assist people in working with model cards. In particular it was created to support research in the area of model cards and their use. It is possible to use the Hugging Face Hub API or client library to download model cards and this option may be preferable if you have a very specific use case or require a different format.
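As a sketch of that alternative path, the Hub client library can fetch a single card directly (the repo id below is an arbitrary example):
```python
from huggingface_hub import ModelCard

# Fetch one model card straight from the Hub
card = ModelCard.load("Qwen/Qwen2.5-3B-Instruct")
print(card.data)  # parsed YAML metadata
print(card.text)  # markdown body of the card
```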
### Source Data
The source data is `README.md` files for models hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the model card directory.
#### Data Collection and Processing
The data is downloaded using a cron job on a daily basis.
#### Who are the source data producers?
The source data producers are the creators of the model cards on the Hugging Face Hub. This includes a broad variety of people from the community ranging from large companies to individual researchers. We do not gather any information about who created the model card in this repository although this information can be gathered from the Hugging Face Hub API.
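For instance, a repository's author can be looked up via the client library (a sketch; the repo id is an arbitrary example):
```python
from huggingface_hub import HfApi

# Look up repository metadata, including the author, through the Hub API
api = HfApi()
info = api.model_info("Qwen/Qwen2.5-3B-Instruct")
print(info.author)
```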
### Annotations [optional]
There are no additional annotations in this dataset beyond the model card content.
#### Annotation process
N/A
#### Who are the annotators?
N/A
#### Personal and Sensitive Information
We make no effort to anonymize the data. Whilst we don't expect the majority of model cards to contain personal or sensitive information, it is possible that some model cards may contain this information. Model cards may also link to websites or email addresses.
## Bias, Risks, and Limitations
Model cards are created by the community, and we have no control over their content. We do not review the model cards and make no claims about the accuracy of the information they contain. Some model cards discuss bias themselves, sometimes by providing examples of bias in either the training data or the responses provided by the model. As a result, this dataset may contain examples of bias.
Whilst we do not directly download any images linked from the model cards, some model cards may include images, and some of these images may not be suitable for all audiences.
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation
No formal citation is required for this dataset, but if you use it in your work, please include a link to this dataset page.
## Dataset Card Authors
## Dataset Card Contact