Each row in this dump is one Hugging Face model repository. Columns and observed value ranges:

| Column | Type | Range |
|---|---|---|
| `modelId` | string | length 5–138 |
| `author` | string | length 2–42 |
| `last_modified` | date | 2020-02-15 11:33:14 – 2025-05-04 18:27:00 |
| `downloads` | int64 | 0 – 223M |
| `likes` | int64 | 0 – 11.7k |
| `library_name` | string | 447 distinct values |
| `tags` | sequence | length 1 – 4.05k |
| `pipeline_tag` | string | 54 distinct values |
| `createdAt` | date | 2022-03-02 23:29:04 – 2025-05-04 18:26:49 |
| `card` | string | length 11 – 1.01M |
## sissiki/Meta-Llama-3.1-70B-Instruct-SQINT8

- **Author:** sissiki · **Library:** transformers · **Pipeline tag:** text-generation
- **Created:** 2025-05-04T16:30:08Z · **Last modified:** 2025-05-04T16:44:17Z
- **Downloads:** 0 · **Likes:** 0
- **Tags:** transformers, safetensors, llama, text-generation, conversational, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, 8-bit, compressed-tensors, region:us
---
library_name: transformers
tags: []
---

# Model Card for Model ID

## Model Details

### Model Description

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

### Direct Use

[More Information Needed]

### Downstream Use [optional]

[More Information Needed]

### Out-of-Scope Use

[More Information Needed]

## Bias, Risks, and Limitations

[More Information Needed]

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

[More Information Needed]

### Training Procedure

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed]

#### Speeds, Sizes, Times [optional]

[More Information Needed]

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

[More Information Needed]

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
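The template's "How to Get Started" section above is empty. Going only by this row's tags (8-bit, compressed-tensors, text-generation), a minimal loading sketch might look like the following; the chat-template usage and memory requirements are our assumptions, not the author's documentation:

```python
# Hedged sketch: loading an 8-bit compressed-tensors checkpoint with transformers.
# Assumes the `compressed-tensors` package is installed and that enough GPU memory
# is available for a 70B model; nothing here comes from the (empty) model card.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "sissiki/Meta-Llama-3.1-70B-Instruct-SQINT8"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

messages = [{"role": "user", "content": "Summarize what INT8 weight quantization does."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
print(tokenizer.decode(model.generate(input_ids, max_new_tokens=64)[0], skip_special_tokens=True))
```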
## kostiantynk-outlook/f8c7bc15-9d6c-4f0e-ac1e-a2548a85d08e

- **Author:** kostiantynk-outlook · **Library:** peft · **Pipeline tag:** null
- **Created:** 2025-05-04T16:23:18Z · **Last modified:** 2025-05-04T16:23:42Z
- **Downloads:** 0 · **Likes:** 0
- **Tags:** peft, safetensors, generated_from_trainer, dataset:92a4ab705f6ca41d_train_data.json, base_model:unsloth/llama-2-7b, base_model:adapter:unsloth/llama-2-7b, region:us
---
library_name: peft
tags:
- generated_from_trainer
datasets:
- 92a4ab705f6ca41d_train_data.json
base_model: unsloth/llama-2-7b
model-index:
- name: kostiantynk-outlook/f8c7bc15-9d6c-4f0e-ac1e-a2548a85d08e
  results: []
---

# kostiantynk-outlook/f8c7bc15-9d6c-4f0e-ac1e-a2548a85d08e

This model was trained from scratch on the /workspace/input_data/92a4ab705f6ca41d_train_data.json dataset. It achieves the following results on the evaluation set:

- Loss: 0.4996

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

### Framework versions

- PEFT 0.15.2
- Transformers 4.51.3
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
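The card above gives no usage snippet. Since the tags mark this repo as a PEFT adapter for unsloth/llama-2-7b, a hedged loading sketch would be:

```python
# Hedged sketch: attaching the adapter in this repo to its tagged base model.
# That the repo contains a loadable PEFT adapter is an assumption from its tags.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("unsloth/llama-2-7b", device_map="auto")
model = PeftModel.from_pretrained(base, "kostiantynk-outlook/f8c7bc15-9d6c-4f0e-ac1e-a2548a85d08e")
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-2-7b")
```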
## apriasmoro/b8bf40ac-3480-40c4-922d-41496ae167fb

- **Author:** apriasmoro · **Library:** transformers · **Pipeline tag:** text-generation
- **Created:** 2025-05-04T15:47:17Z · **Last modified:** 2025-05-04T15:51:26Z
- **Downloads:** 0 · **Likes:** 0
- **Tags:** transformers, pytorch, tensorboard, safetensors, llama, text-generation, generated_from_trainer, axolotl, dpo, trl, conversational, arxiv:2305.18290, base_model:MLP-KTLim/llama-3-Korean-Bllossom-8B, base_model:finetune:MLP-KTLim/llama-3-Korean-Bllossom-8B, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
---
base_model: MLP-KTLim/llama-3-Korean-Bllossom-8B
library_name: transformers
model_name: b8bf40ac-3480-40c4-922d-41496ae167fb
tags:
- generated_from_trainer
- axolotl
- dpo
- trl
licence: license
---

# Model Card for b8bf40ac-3480-40c4-922d-41496ae167fb

This model is a fine-tuned version of [MLP-KTLim/llama-3-Korean-Bllossom-8B](https://huggingface.co/MLP-KTLim/llama-3-Korean-Bllossom-8B). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="apriasmoro/b8bf40ac-3480-40c4-922d-41496ae167fb", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/apriasmoro-abcstudio/llama3_dpo/runs/38j1qy41)

This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290).

### Framework versions

- TRL: 0.12.0
- Transformers: 4.46.3
- Pytorch: 2.5.1+cu124
- Datasets: 3.1.0
- Tokenizers: 0.20.3

## Citations

Cite DPO as:

```bibtex
@inproceedings{rafailov2023direct,
    title     = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}},
    author    = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn},
    year      = 2023,
    booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023},
    url       = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html},
    editor    = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
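For readers unfamiliar with the training method named above, here is a hedged sketch of a TRL ~0.12-style DPO run; the preference dataset and hyperparameters are illustrative stand-ins, not the author's actual axolotl setup:

```python
# Hedged sketch of a DPO fine-tune in the spirit of the card above (TRL ~0.12 API).
# The dataset and beta value are placeholders; the real training data is unknown.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_id = "MLP-KTLim/llama-3-Korean-Bllossom-8B"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")  # placeholder preference data

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo-out", beta=0.1),
    train_dataset=train_dataset,
    processing_class=tokenizer,
)
trainer.train()
```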
## dnotitia/Smoothie-Qwen2.5-1.5B-Instruct

- **Author:** dnotitia · **Library:** transformers · **Pipeline tag:** text-generation
- **Created:** 2025-04-22T14:05:52Z · **Last modified:** 2025-05-04T15:07:37Z
- **Downloads:** 0 · **Likes:** 1
- **Tags:** transformers, safetensors, qwen2, text-generation, dnotitia, nlp, llm, conversation, chat, conversational, en, base_model:Qwen/Qwen2.5-1.5B-Instruct, base_model:finetune:Qwen/Qwen2.5-1.5B-Instruct, license:apache-2.0, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
---
language:
- en
license: apache-2.0
tags:
- dnotitia
- nlp
- llm
- conversation
- chat
base_model:
- Qwen/Qwen2.5-1.5B-Instruct
library_name: transformers
pipeline_tag: text-generation
---

# Smoothie Qwen

<img src="https://github.com/dnotitia/smoothie-qwen/raw/main/asset/smoothie-qwen-logo.png" width="400" style="max-width: 100%;">

**Smoothie Qwen** is a lightweight adjustment tool that smooths token probabilities in Qwen and similar models, enhancing balanced multilingual generation capabilities. For more details, please refer to <https://github.com/dnotitia/smoothie-qwen>.

- Base model: Qwen/Qwen2.5-1.5B-Instruct
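Since Smoothie Qwen keeps the standard Qwen2.5 chat interface (an assumption based on its base model; the card gives no usage code), a minimal generation sketch:

```python
# Minimal sketch: standard transformers chat generation; that no special handling
# is needed beyond the base Qwen2.5 interface is our assumption.
from transformers import pipeline

generator = pipeline("text-generation", model="dnotitia/Smoothie-Qwen2.5-1.5B-Instruct")
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
print(generator(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```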
## Svngoku/AfricanHairFluxLora

- **Author:** Svngoku · **Library:** diffusers · **Pipeline tag:** text-to-image
- **Created:** 2025-04-27T11:00:58Z · **Last modified:** 2025-05-04T14:09:10Z
- **Downloads:** 4 · **Likes:** 1
- **Tags:** diffusers, flux, text-to-image, lora, fal, base_model:black-forest-labs/FLUX.1-dev, base_model:adapter:black-forest-labs/FLUX.1-dev, license:other, region:us
---
tags:
- flux
- text-to-image
- lora
- diffusers
- fal
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: Afro Hair
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
widget:
- text: Photography of styled Afro Hair, style instagram png
  output:
    url: images/example_ufdsuof10.png
- text: >-
    Photography of styled Afro Hair, commercial ads, cosmetics, hair powered shampoo
  output:
    url: images/example_qxbh4iyff.png
- text: >-
    8k photorealistic image of an older black skin grandmother with wrinkles, beautiful silver white dread locks in black head wrap, round rim glasses, different angles, character sheets, with the Afro Hair style
  output:
    url: images/example_3ubr5t0fh.png
---

# AfricanHairFluxLora

<Gallery />

## Model description

## Trigger words

You should use `Afro Hair` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format. [Download](/Svngoku/AfricanHairFluxLora/tree/main) them in the Files & versions tab.

## Training at fal.ai

Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
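The card names the trigger phrase but provides no code. A hedged diffusers sketch, assuming the LoRA weights load under diffusers' default filename (pass `weight_name=...` otherwise):

```python
# Hedged sketch: FLUX.1-dev with this LoRA and the documented trigger phrase.
# Assumes a CUDA GPU with enough memory and a default-named LoRA weight file.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("Svngoku/AfricanHairFluxLora")
image = pipe("Photography of styled Afro Hair, style instagram png", num_inference_steps=28).images[0]
image.save("afro_hair.png")
```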
## shima751284/cat

- **Author:** shima751284 · **Library:** null · **Pipeline tag:** null
- **Created:** 2025-05-04T14:00:44Z · **Last modified:** 2025-05-04T14:00:44Z
- **Downloads:** 0 · **Likes:** 0
- **Tags:** license:artistic-2.0, region:us
---
license: artistic-2.0
---
## mjs227/rltu_grpo_10_0_249-llama-merged

- **Author:** mjs227 · **Library:** transformers · **Pipeline tag:** text-generation
- **Created:** 2025-05-04T13:31:22Z · **Last modified:** 2025-05-04T13:51:55Z
- **Downloads:** 0 · **Likes:** 0
- **Tags:** transformers, safetensors, llama, text-generation, conversational, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
---
library_name: transformers
tags: []
---

*Autogenerated 🤗 Transformers model-card template with no fields filled in; the body is identical to the placeholder card shown above under sissiki/Meta-Llama-3.1-70B-Instruct-SQINT8.*
## dgambettaphd/M_llm2_gen10_WXS_doc1000_synt64_lr1e-04_acm_FRESH

- **Author:** dgambettaphd · **Library:** transformers · **Pipeline tag:** null
- **Created:** 2025-05-04T13:41:24Z · **Last modified:** 2025-05-04T13:41:36Z
- **Downloads:** 0 · **Likes:** 0
- **Tags:** transformers, safetensors, unsloth, arxiv:1910.09700, endpoints_compatible, region:us
---
library_name: transformers
tags:
- unsloth
---

*Autogenerated placeholder card, identical to the template shown above under sissiki/Meta-Llama-3.1-70B-Instruct-SQINT8; no fields are filled in.*
## Dayy010897/Pi

- **Author:** Dayy010897 · **Library:** null · **Pipeline tag:** null
- **Created:** 2025-05-04T13:12:30Z · **Last modified:** 2025-05-04T13:12:30Z
- **Downloads:** 0 · **Likes:** 0
- **Tags:** license:apache-2.0, region:us
---
license: apache-2.0
---
## akii0w0/outputs-durable_freckled_reindeer

- **Author:** akii0w0 · **Library:** transformers · **Pipeline tag:** null
- **Created:** 2025-05-04T12:58:21Z · **Last modified:** 2025-05-04T12:58:29Z
- **Downloads:** 0 · **Likes:** 0
- **Tags:** transformers, safetensors, generated_from_trainer, rl-swarm, grpo, gensyn, I am durable freckled reindeer, unsloth, trl, arxiv:2402.03300, base_model:Gensyn/Qwen2.5-0.5B-Instruct, base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct, endpoints_compatible, region:us
---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: outputs-durable_freckled_reindeer
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am durable freckled reindeer
- unsloth
- trl
licence: license
---

# Model Card for outputs-durable_freckled_reindeer

This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="akii0w0/outputs-durable_freckled_reindeer", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.15.2
- Transformers: 4.51.0
- Pytorch: 2.6.0
- Datasets: 2.21.0
- Tokenizers: 0.21.1

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
    title  = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year   = 2024,
    eprint = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
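For context on the method named above, here is a hedged sketch of a GRPO run with the TRL ≥0.14 API; the dataset and toy reward function are illustrative, not the actual RL-swarm setup:

```python
# Hedged sketch of GRPO training; the dataset and reward function are toys.
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")  # placeholder prompt dataset

def reward_len(completions, **kwargs):
    # Toy reward: prefer completions close to 50 characters.
    return [-abs(50 - len(completion)) for completion in completions]

trainer = GRPOTrainer(
    model="Gensyn/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_len,
    args=GRPOConfig(output_dir="grpo-out"),
    train_dataset=dataset,
)
trainer.train()
```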
## Gwangwoon/muse2

- **Author:** Gwangwoon · **Library:** peft · **Pipeline tag:** null
- **Created:** 2025-03-28T05:09:40Z · **Last modified:** 2025-05-04T12:31:25Z
- **Downloads:** 141 · **Likes:** 0
- **Tags:** peft, safetensors, zho, eng, fra, spa, por, deu, ita, rus, jpn, kor, vie, tha, ara, arxiv:1910.09700, base_model:Qwen/Qwen2.5-7B-Instruct, base_model:adapter:Qwen/Qwen2.5-7B-Instruct, region:us
---
base_model: Qwen/Qwen2.5-7B-Instruct
library_name: peft
language:
- zho
- eng
- fra
- spa
- por
- deu
- ita
- rus
- jpn
- kor
- vie
- tha
- ara
---

*Autogenerated placeholder card; apart from the frontmatter above and the framework versions below, the body is identical to the template shown under sissiki/Meta-Llama-3.1-70B-Instruct-SQINT8.*

### Framework versions

- PEFT 0.15.1
## pranitha02/my-lora-model

- **Author:** pranitha02 · **Library:** diffusers · **Pipeline tag:** text-to-image
- **Created:** 2025-05-04T10:05:23Z · **Last modified:** 2025-05-04T12:25:47Z
- **Downloads:** 0 · **Likes:** 0
- **Tags:** diffusers, tensorboard, text-to-image, diffusers-training, lora, template:sd-lora, stable-diffusion-xl, stable-diffusion-xl-diffusers, base_model:stabilityai/stable-diffusion-xl-base-1.0, base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0, license:openrail++, region:us
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: watercolor style image
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---

# SDXL LoRA DreamBooth - pranitha02/my-lora-model

<Gallery />

## Model description

These are pranitha02/my-lora-model LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using [DreamBooth](https://dreambooth.github.io/).

LoRA for the text encoder was enabled: False.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Trigger words

You should use `watercolor style image` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format. [Download](/pranitha02/my-lora-model/tree/main) them in the Files & versions tab.

## Intended uses & limitations

#### How to use

```python
# TODO: add an example code snippet for running this diffusion pipeline
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
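The card's "How to use" section is still a TODO; here is a hedged sketch of standard SDXL LoRA inference with diffusers, using the stated trigger phrase and the fp16-fix VAE the card says was used in training:

```python
# Hedged sketch: standard SDXL + LoRA inference; not the author's own snippet.
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("pranitha02/my-lora-model")
image = pipe("watercolor style image of a quiet harbor at dawn").images[0]
image.save("watercolor_harbor.png")
```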
## SimonYKLi/llama3-8b-sentiment-may-3-2024

- **Author:** SimonYKLi · **Library:** transformers · **Pipeline tag:** text-generation
- **Created:** 2025-05-04T12:08:19Z · **Last modified:** 2025-05-04T12:13:32Z
- **Downloads:** 0 · **Likes:** 0
- **Tags:** transformers, safetensors, llama, text-generation, unsloth, trl, sft, conversational, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us
---
library_name: transformers
tags:
- unsloth
- trl
- sft
---

*Autogenerated placeholder card, identical to the template shown above under sissiki/Meta-Llama-3.1-70B-Instruct-SQINT8; no fields are filled in.*
## npv2k1/Qwen2.5-7B-n8n

- **Author:** npv2k1 · **Library:** null · **Pipeline tag:** text-generation
- **Created:** 2025-05-04T08:53:23Z · **Last modified:** 2025-05-04T12:08:15Z
- **Downloads:** 0 · **Likes:** 0
- **Tags:** gguf, qwen2, text-generation, en, dataset:npv2k1/n8n-workflow, base_model:unsloth/Qwen2.5-7B, base_model:quantized:unsloth/Qwen2.5-7B, license:mit, endpoints_compatible, region:us
---
license: mit
datasets:
- npv2k1/n8n-workflow
language:
- en
base_model:
- unsloth/Qwen2.5-7B
pipeline_tag: text-generation
---
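The card is frontmatter only. Since the repo ships GGUF weights (per its tags), a hedged sketch with llama-cpp-python; the `*.gguf` filename glob assumes a single quant file in the repo:

```python
# Hedged sketch: fetching and running the GGUF weights with llama-cpp-python.
# The filename glob and context size are assumptions, not documented values.
from llama_cpp import Llama

llm = Llama.from_pretrained(repo_id="npv2k1/Qwen2.5-7B-n8n", filename="*.gguf", n_ctx=4096)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Draft an n8n workflow that posts new RSS items to Slack."}]
)
print(out["choices"][0]["message"]["content"])
```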
## cali50/cali50

- **Author:** cali50 · **Library:** null · **Pipeline tag:** null
- **Created:** 2025-05-04T11:39:46Z · **Last modified:** 2025-05-04T11:39:48Z
- **Downloads:** 0 · **Likes:** 0
- **Tags:** license:bigscience-bloom-rail-1.0, region:us
---
license: bigscience-bloom-rail-1.0
---
## mlabonne/Qwen3-14B-abliterated

- **Author:** mlabonne · **Library:** transformers · **Pipeline tag:** text-generation
- **Created:** 2025-04-29T22:00:18Z · **Last modified:** 2025-05-04T11:25:10Z
- **Downloads:** 173 · **Likes:** 10
- **Tags:** transformers, safetensors, qwen3, text-generation, conversational, arxiv:1910.09700, autotrain_compatible, endpoints_compatible, region:us
---
library_name: transformers
tags: []
---

*Autogenerated placeholder card, identical to the template shown above under sissiki/Meta-Llama-3.1-70B-Instruct-SQINT8; no fields are filled in.*
## iamwille/wav2vec2-large-xls-r-300m-hausa-colab

- **Author:** iamwille · **Library:** transformers · **Pipeline tag:** automatic-speech-recognition
- **Created:** 2025-05-04T03:30:28Z · **Last modified:** 2025-05-04T11:24:48Z
- **Downloads:** 0 · **Likes:** 0
- **Tags:** transformers, safetensors, wav2vec2, automatic-speech-recognition, arxiv:1910.09700, endpoints_compatible, region:us
---
library_name: transformers
tags: []
---

*Autogenerated placeholder card, identical to the template shown above under sissiki/Meta-Llama-3.1-70B-Instruct-SQINT8; no fields are filled in.*
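The card above is an unfilled template; going only by the row's library and pipeline tag (transformers, automatic-speech-recognition), a hedged transcription sketch:

```python
# Hedged sketch: generic wav2vec2 transcription via the ASR pipeline.
# The audio file path is a placeholder; resampling is left to the pipeline.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="iamwille/wav2vec2-large-xls-r-300m-hausa-colab")
print(asr("sample_hausa_clip.wav")["text"])
```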
## MrRobotoAI/114-Q4_K_M-GGUF

- **Author:** MrRobotoAI · **Library:** transformers · **Pipeline tag:** null
- **Created:** 2025-05-04T11:16:40Z · **Last modified:** 2025-05-04T11:17:02Z
- **Downloads:** 145 · **Likes:** 0
- **Tags:** transformers, gguf, mergekit, merge, llama-cpp, gguf-my-repo, base_model:MrRobotoAI/114, base_model:quantized:MrRobotoAI/114, endpoints_compatible, region:us, conversational
---
base_model: MrRobotoAI/114
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---

# MrRobotoAI/114-Q4_K_M-GGUF

This model was converted to GGUF format from [`MrRobotoAI/114`](https://huggingface.co/MrRobotoAI/114) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/MrRobotoAI/114) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo MrRobotoAI/114-Q4_K_M-GGUF --hf-file 114-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo MrRobotoAI/114-Q4_K_M-GGUF --hf-file 114-q4_k_m.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).

```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```bash
./llama-cli --hf-repo MrRobotoAI/114-Q4_K_M-GGUF --hf-file 114-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```bash
./llama-server --hf-repo MrRobotoAI/114-Q4_K_M-GGUF --hf-file 114-q4_k_m.gguf -c 2048
```
## mesolitica/Malaysian-Qwen2.5-32B-Instruct

- **Author:** mesolitica · **Library:** null · **Pipeline tag:** null
- **Created:** 2025-04-24T14:55:40Z · **Last modified:** 2025-05-04T10:45:38Z
- **Downloads:** 0 · **Likes:** 0
- **Tags:** safetensors, qwen2, ms, en, zh, ta, region:us
---
language:
- ms
- en
- zh
- ta
---

# Malaysian Qwen 2.5 32B Instruct

Continued finetuning of https://huggingface.co/Qwen/Qwen2.5-32B-Instruct on a highly curated 1.5B-token Malaysian instruction dataset.

## Improvement

1. Supports responding in Mandarin, Tamil, Jawi, Manglish, Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu.
2. Able to code in Mandarin, Tamil, Jawi, Manglish, Johor, Kedah, Kelantan, Pahang, Perak, Sabah, Sarawak, Selangor, Negeri Sembilan and Terengganu.
3. Multi-turn Malaysian context, such as topics related to Malaysian legislation, politics, religions and languages.

## Training session

Finetuned on [mesolitica/Malaysian-SFT](https://huggingface.co/datasets/mesolitica/Malaysian-SFT) to make the model understand Malaysian context.

## How we train

1. LoRA on `["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj", "embed_tokens", "lm_head"]`.
2. Rank 128 with alpha 256, i.e. an effective LoRA scaling of 2.0.
3. Multipacking at 8192 context length with proper SDPA causal masking to prevent document contamination, and with proper position ids.
4. Chunked CCE loss for LoRA.
5. WandB at https://wandb.ai/huseinzol05/lora-embedding-128-qwen2.5-32b-malaysian-8k?nw=nwuserhuseinzol05

Source code at https://github.com/mesolitica/malaya/tree/master/session/qwen2.5

## Acknowledgement

Special thanks to https://www.sns.com.my for the 8x H100 node!
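A hedged peft sketch of the LoRA configuration described in "How we train" (rank 128, alpha 256, the listed target modules); the remaining settings are assumptions:

```python
# Sketch of the stated LoRA setup; dropout and other fields are assumptions.
from peft import LoraConfig

lora_config = LoraConfig(
    r=128,
    lora_alpha=256,  # alpha / r = 2.0 effective scaling, as the card notes
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
        "embed_tokens", "lm_head",
    ],
    task_type="CAUSAL_LM",
)
```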
## kreasof-ai/nllb-200-600M-bem2eng-bigc-flores200-tatoeba

- **Author:** kreasof-ai · **Library:** transformers · **Pipeline tag:** translation
- **Created:** 2025-04-13T08:54:09Z · **Last modified:** 2025-05-04T10:42:01Z
- **Downloads:** 65 · **Likes:** 0
- **Tags:** transformers, safetensors, m2m_100, text2text-generation, generated_from_trainer, translation, af, en, dataset:kreasof-ai/bigc-bem-eng, dataset:kreasof-ai/flores200-eng-bem, dataset:kreasof-ai/tatoeba-eng-bem-backtranslation, base_model:facebook/nllb-200-distilled-600M, base_model:finetune:facebook/nllb-200-distilled-600M, license:cc-by-nc-4.0, autotrain_compatible, endpoints_compatible, region:us
--- library_name: transformers license: cc-by-nc-4.0 base_model: facebook/nllb-200-distilled-600M tags: - generated_from_trainer metrics: - bleu - chrf - comet model-index: - name: nllb-200-distilled-600M-bem2en-flores200 results: [] datasets: - kreasof-ai/bigc-bem-eng - kreasof-ai/flores200-eng-bem - kreasof-ai/tatoeba-eng-bem-backtranslation language: - af - en pipeline_tag: translation --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nllb-200-distilled-600M-bem2en-flores200 This model is a fine-tuned version of [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M) on the [Big-C dataset](https://huggingface.co/datasets/kreasof-ai/bem-eng-bigc), [Tatoeba Augmented Dataset](https://huggingface.co/datasets/kreasof-ai/tatoeba-eng-bem-backtranslation), and [FLORES-200 Dataset](kreasof-ai/flores200-eng-bem). It achieves the following results on the evaluation set: - Loss: 0.1761 - Bleu: 27.39 - Chrf: 51.72 ## Model description This model is a translation model that translate Bemba to English. This model is trained on [facebook/nllb-200-distilled-600M](https://huggingface.co/facebook/nllb-200-distilled-600M). There are two model versions in this repo. One is trained using Big-C and FLORES-200, with commit hash `49cd2`. The other is trained using Big-C, FLORES-200, and Tatoeba Dataset, with commit hash `b7ab3`. ## Intended uses This model is applied to the Bemba-to-English translation task as part of the IWSLT 2025 Low-Resource Track. ## Training and evaluation data This model is trained using the `train+val` split from Big-C Dataset, `train` split from Augmented Tatoeba Dataset, and `dev` split from FLORES-200 Dataset. Meanwhile for evaluation, this model used `test` split from Big-C and `devtest` split from FLORES-200. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Chrf | |:-------------:|:-----:|:-----:|:---------------:|:-----:|:-----:| | 0.1664 | 1.0 | 13236 | 0.1843 | 25.87 | 50.74 | | 0.1399 | 2.0 | 26472 | 0.1773 | 26.95 | 51.3 | | 0.126 | 3.0 | 39708 | 0.1761 | 27.39 | 51.72 | ### Model Evaluation Performance of this model was evaluated using BLEU, ChrF++, and AfriCOMET on the test split of Big-C Dataset. | Commit-Hash|Bleu | ChrF++|AfriCOMET| |:----------:|:-----:|:-----:|:-------:| |49cd2 | 27.96 | 51.03 | 53.29 | |b7ab3 | 28.6 | 51.38 | 53.08 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.2.0+cu121 - Datasets 3.5.0 - Tokenizers 0.21.1 ## Citation ``` @inproceedings{nllb2022, title = {No Language Left Behind: Scaling Human-Centered Machine Translation}, author = {Costa-jussà, Marta R. 
## Training and evaluation data This model was trained on the `train+val` split of the Big-C Dataset, the `train` split of the Augmented Tatoeba Dataset, and the `dev` split of the FLORES-200 Dataset. For evaluation, it used the `test` split of Big-C and the `devtest` split of FLORES-200. ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.03 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Chrf | |:-------------:|:-----:|:-----:|:---------------:|:-----:|:-----:| | 0.1664 | 1.0 | 13236 | 0.1843 | 25.87 | 50.74 | | 0.1399 | 2.0 | 26472 | 0.1773 | 26.95 | 51.3 | | 0.126 | 3.0 | 39708 | 0.1761 | 27.39 | 51.72 | ### Model Evaluation Performance of this model was evaluated using BLEU, ChrF++, and AfriCOMET on the `test` split of the Big-C Dataset. | Commit-Hash|Bleu | ChrF++|AfriCOMET| |:----------:|:-----:|:-----:|:-------:| |49cd2 | 27.96 | 51.03 | 53.29 | |b7ab3 | 28.6 | 51.38 | 53.08 | ### Framework versions - Transformers 4.51.3 - Pytorch 2.2.0+cu121 - Datasets 3.5.0 - Tokenizers 0.21.1 ## Citation ``` @inproceedings{nllb2022, title = {No Language Left Behind: Scaling Human-Centered Machine Translation}, author = {Costa-jussà, Marta R. and Cross, James and et al.}, booktitle = {Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing (EMNLP)}, year = {2022}, publisher = {Association for Computational Linguistics}, url = {https://aclanthology.org/2022.emnlp-main.9} } @inproceedings{sikasote-etal-2023-big, title = "{BIG}-{C}: a Multimodal Multi-Purpose Dataset for {B}emba", author = "Sikasote, Claytone and Mukonde, Eunice and Alam, Md Mahfuz Ibn and Anastasopoulos, Antonios", editor = "Rogers, Anna and Boyd-Graber, Jordan and Okazaki, Naoaki", booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jul, year = "2023", address = "Toronto, Canada", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.acl-long.115", doi = "10.18653/v1/2023.acl-long.115", pages = "2062--2078", abstract = "We present BIG-C (Bemba Image Grounded Conversations), a large multimodal dataset for Bemba. While Bemba is the most populous language of Zambia, it exhibits a dearth of resources which render the development of language technologies or language processing research almost impossible. The dataset is comprised of multi-turn dialogues between Bemba speakers based on images, transcribed and translated into English. There are more than 92,000 utterances/sentences, amounting to more than 180 hours of audio data with corresponding transcriptions and English translations. We also provide baselines on speech recognition (ASR), machine translation (MT) and speech translation (ST) tasks, and sketch out other potential future multimodal uses of our dataset. We hope that by making the dataset available to the research community, this work will foster research and encourage collaboration across the language, speech, and vision communities especially for languages outside the {``}traditionally{''} used high-resourced ones. All data and code are publicly available: [\url{https://github.com/csikasote/bigc}](\url{https://github.com/csikasote/bigc}).", } @inproceedings{wang-etal-2024-afrimte, title = "{A}fri{MTE} and {A}fri{COMET}: Enhancing {COMET} to Embrace Under-resourced {A}frican Languages", author = "Wang, Jiayi and Adelani, David and Agrawal, Sweta and Masiak, Marek and Rei, Ricardo and Briakou, Eleftheria and Carpuat, Marine and He, Xuanli and others", booktitle = "Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)", month = "jun", year = "2024", address = "Mexico City, Mexico", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.naacl-long.334/", doi = "10.18653/v1/2024.naacl-long.334", pages = "5997--6023" } @inproceedings{wang2024evaluating, title={Evaluating WMT 2024 Metrics Shared Task Submissions on AfriMTE (the African Challenge Set)}, author={Wang, Jiayi and Adelani, David Ifeoluwa and Stenetorp, Pontus}, booktitle={Proceedings of the Ninth Conference on Machine Translation}, pages={505--516}, year={2024} } @inproceedings{freitag2024llms, title={Are LLMs breaking MT metrics? 
results of the WMT24 metrics shared task}, author={Freitag, Markus and Mathur, Nitika and Deutsch, Daniel and Lo, Chi-Kiu and Avramidis, Eleftherios and Rei, Ricardo and Thompson, Brian and Blain, Frederic and Kocmi, Tom and Wang, Jiayi and others}, booktitle={Proceedings of the Ninth Conference on Machine Translation}, pages={47--81}, year={2024} } ``` # Contact This model was trained by [Hazim](https://huggingface.co/cobrayyxx). # Acknowledgments Huge thanks to [Yasmin Moslem](https://huggingface.co/ymoslem) for her supervision, and to [Habibullah Akbar](https://huggingface.co/ChavyvAkvar), the founder of Kreasof-AI, for his leadership and support.
laampt/lecun-showcase
laampt
"2025-05-04T10:30:05Z"
0
0
null
[ "onnx", "license:apache-2.0", "region:us" ]
null
"2025-05-04T10:29:13Z"
--- license: apache-2.0 ---
datapaf/ve_focus_starcoder2_racket
datapaf
"2025-05-04T10:26:48Z"
0
0
transformers
[ "transformers", "safetensors", "starcoder2", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-04T09:52:00Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
worstchan/EAT-base_epoch30_pretrain
worstchan
"2025-05-04T10:25:52Z"
0
0
transformers
[ "transformers", "safetensors", "eat", "feature-extraction", "Audio", "SSL", "EAT", "custom_code", "arxiv:2401.03497", "license:mit", "region:us" ]
feature-extraction
"2025-05-03T06:30:33Z"
--- license: mit tags: - Audio - SSL - EAT library_name: transformers --- # EAT-base (Epoch 30, Pre-trained checkpoint) This is the **pre-trained EAT-base model** at epoch 30, trained on the AS-2M dataset with audio self-supervised learning. The model provides efficient feature extraction for downstream audio understanding tasks such as audio classification and audio captioning. ## 🔧 Usage You can load and use the model for feature extraction directly via Hugging Face Transformers: ```python import torchaudio import torch import soundfile as sf import numpy as np from transformers import AutoModel model_id = "worstchan/EAT-base_epoch30_pretrain" model = AutoModel.from_pretrained(model_id, trust_remote_code=True).eval().cuda() source_file = "/path/to/input.wav" target_file = "/path/to/output.npy" target_length = 1024 # Recommended: 1024 for 10s audio norm_mean = -4.268 norm_std = 4.569 # Load and resample audio wav, sr = sf.read(source_file) waveform = torch.tensor(wav).float().cuda() if sr != 16000: waveform = torchaudio.functional.resample(waveform, sr, 16000) # Normalize and convert to mel-spectrogram waveform = waveform - waveform.mean() mel = torchaudio.compliance.kaldi.fbank( waveform.unsqueeze(0), htk_compat=True, sample_frequency=16000, use_energy=False, window_type='hanning', num_mel_bins=128, dither=0.0, frame_shift=10 ).unsqueeze(0) # Pad or truncate n_frames = mel.shape[1] if n_frames < target_length: mel = torch.nn.ZeroPad2d((0, 0, 0, target_length - n_frames))(mel) else: mel = mel[:, :target_length, :] # Normalize mel = (mel - norm_mean) / (norm_std * 2) mel = mel.unsqueeze(0).cuda() # shape: [1, 1, T, F] # Extract features with torch.no_grad(): feat = model.extract_features(mel) feat = feat.squeeze(0).cpu().numpy() np.save(target_file, feat) print(f"Feature shape: {feat.shape}") print(f"Saved to: {target_file}") ``` ## 📌 Notes The model supports both **frame-level** (~50Hz) and **utterance-level** representations (CLS token). (See [feature extraction guide](https://github.com/cwx-worst-one/EAT/tree/main/feature_extract) for detailed instructions.) ## 🧪 Checkpoints This model was trained for 30 epochs on AS-2M using the EAT framework. For more checkpoints and fine-tuned versions, see the [EAT project repository](https://github.com/cwx-worst-one/EAT). ## 📚 Citation If you find this model useful, please consider citing our [paper](https://arxiv.org/abs/2401.03497): ```bibtex @article{chen2024eat, title={EAT: Self-supervised pre-training with efficient audio transformer}, author={Chen, Wenxi and Liang, Yuzhe and Ma, Ziyang and Zheng, Zhisheng and Chen, Xie}, journal={arXiv preprint arXiv:2401.03497}, year={2024} } ```
ivangrapher/80608c4e-ff42-4c7c-831c-da4b82726652
ivangrapher
"2025-05-04T10:24:55Z"
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:Qwen/Qwen2.5-Coder-7B-Instruct", "base_model:adapter:Qwen/Qwen2.5-Coder-7B-Instruct", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-05-04T09:03:04Z"
--- library_name: peft license: apache-2.0 base_model: Qwen/Qwen2.5-Coder-7B-Instruct tags: - axolotl - generated_from_trainer model-index: - name: 80608c4e-ff42-4c7c-831c-da4b82726652 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: Qwen/Qwen2.5-Coder-7B-Instruct bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 6b473e47395e4472_train_data.json ds_type: json format: custom path: /workspace/input_data/6b473e47395e4472_train_data.json type: field_input: context field_instruction: instruction field_output: response format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.55 group_by_length: false hub_model_id: ivangrapher/80608c4e-ff42-4c7c-831c-da4b82726652 hub_repo: null hub_strategy: end hub_token: null learning_rate: 1.0e-06 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 150 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/6b473e47395e4472_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 2048 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: eb4497fe-04bc-4cc5-9104-87e75a418525 wandb_project: s56-7 wandb_run: your_name wandb_runid: eb4497fe-04bc-4cc5-9104-87e75a418525 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 80608c4e-ff42-4c7c-831c-da4b82726652 This model is a fine-tuned version of [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.0490 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.0553 | 0.0046 | 150 | 2.0490 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
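A hedged loading sketch, not part of the generated card: the adapter id is this repo, the base model is taken from the config above, and the prompt is illustrative.

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "Qwen/Qwen2.5-Coder-7B-Instruct"
adapter_id = "ivangrapher/80608c4e-ff42-4c7c-831c-da4b82726652"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter

messages = [{"role": "user", "content": "Write a Python function that reverses a string."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```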
robinfaro/StandardMoE-1B-fineweb_edu-90BT
robinfaro
"2025-05-04T10:10:16Z"
1
0
null
[ "safetensors", "moegpt", "model_hub_mixin", "pytorch_model_hub_mixin", "custom_code", "region:us" ]
null
"2025-04-28T07:22:35Z"
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Code: [More Information Needed] - Paper: [More Information Needed] - Docs: [More Information Needed]
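As a sketch of the mixin pattern this card refers to (the actual model class ships as custom code in this repo; `TinyNet` below is a hypothetical stand-in that only shows the API):

```python
import torch
from huggingface_hub import PyTorchModelHubMixin

class TinyNet(torch.nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 16):
        super().__init__()
        self.linear = torch.nn.Linear(hidden_size, hidden_size)

    def forward(self, x):
        return self.linear(x)

# The mixin provides save_pretrained / from_pretrained (and push_to_hub),
# serializing the __init__ arguments as the config.
net = TinyNet(hidden_size=32)
net.save_pretrained("tinynet-local")
reloaded = TinyNet.from_pretrained("tinynet-local")
```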
John6666/aetheria-v10-sdxl
John6666
"2025-05-04T09:57:57Z"
0
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "anime", "girls", "styles", "cute", "merge", "Illustrious XL v2.0", "illustrious", "en", "base_model:OnomaAIResearch/Illustrious-XL-v2.0", "base_model:merge:OnomaAIResearch/Illustrious-XL-v2.0", "base_model:yyy1026/songMix", "base_model:merge:yyy1026/songMix", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2025-05-04T09:52:13Z"
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - anime - girls - styles - cute - merge - Illustrious XL v2.0 - illustrious base_model: - OnomaAIResearch/Illustrious-XL-v2.0 - yyy1026/songMix --- The original model is [here](https://civitai.com/models/1541411/aetheria?modelVersionId=1744078). This model was created by [morisoba777783](https://civitai.com/user/morisoba777783).
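A minimal diffusers sketch, not from the original card (the prompt and step count are illustrative):

```python
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "John6666/aetheria-v10-sdxl", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    "1girl, cute, detailed eyes, masterpiece, best quality",
    num_inference_steps=28,
).images[0]
image.save("aetheria_sample.png")
```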
vmpsergio/d9a23fdc-e06b-418d-bd48-8cc795eac64f
vmpsergio
"2025-05-04T09:31:14Z"
0
0
peft
[ "peft", "safetensors", "starcoder2", "axolotl", "generated_from_trainer", "base_model:bigcode/starcoder2-3b", "base_model:adapter:bigcode/starcoder2-3b", "license:bigcode-openrail-m", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-05-04T09:17:57Z"
--- library_name: peft license: bigcode-openrail-m base_model: bigcode/starcoder2-3b tags: - axolotl - generated_from_trainer model-index: - name: d9a23fdc-e06b-418d-bd48-8cc795eac64f results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: bigcode/starcoder2-3b bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - 1d3219f72b2f3c95_train_data.json ds_type: json format: custom path: /workspace/input_data/1d3219f72b2f3c95_train_data.json type: field_instruction: question field_output: answer format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.5 group_by_length: false hub_model_id: vmpsergio/d9a23fdc-e06b-418d-bd48-8cc795eac64f hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-06 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 6 mixed_precision: bf16 mlflow_experiment_name: /tmp/1d3219f72b2f3c95_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 special_tokens: pad_token: <|endoftext|> strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 522073c0-1c50-4bda-be86-86bd642b495a wandb_project: s56-2 wandb_run: your_name wandb_runid: 522073c0-1c50-4bda-be86-86bd642b495a warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # d9a23fdc-e06b-418d-bd48-8cc795eac64f This model is a fine-tuned version of [bigcode/starcoder2-3b](https://huggingface.co/bigcode/starcoder2-3b) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 2.7563 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 6 - eval_batch_size: 6 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 5.2066 | 0.0447 | 200 | 2.7563 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
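A hedged sketch, not part of the generated card, of merging this LoRA into the base model so it can be served without PEFT (repo ids are taken from the config above; the output directory is illustrative):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "bigcode/starcoder2-3b"
adapter_id = "vmpsergio/d9a23fdc-e06b-418d-bd48-8cc795eac64f"

model = AutoModelForCausalLM.from_pretrained(base_id)
model = PeftModel.from_pretrained(model, adapter_id)
merged = model.merge_and_unload()  # fold the LoRA deltas into the base weights

merged.save_pretrained("starcoder2-3b-merged")
AutoTokenizer.from_pretrained(base_id).save_pretrained("starcoder2-3b-merged")
```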
Chintooo/gemma-text-to-sql
Chintooo
"2025-05-04T09:17:52Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-3-1b-pt", "base_model:finetune:google/gemma-3-1b-pt", "endpoints_compatible", "region:us" ]
null
"2025-05-04T08:21:42Z"
--- base_model: google/gemma-3-1b-pt library_name: transformers model_name: gemma-text-to-sql tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for gemma-text-to-sql This model is a fine-tuned version of [google/gemma-3-1b-pt](https://huggingface.co/google/gemma-3-1b-pt). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="Chintooo/gemma-text-to-sql", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0+cu124 - Datasets: 3.3.2 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
RohanKumarMishra/detr-finetuned-balloon-v2
RohanKumarMishra
"2025-05-04T09:10:56Z"
0
0
transformers
[ "transformers", "safetensors", "detr", "object-detection", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
object-detection
"2025-05-04T09:10:44Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Tenshiren/medqwen_lora
Tenshiren
"2025-05-04T09:05:29Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2_5_vl", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-05-04T09:05:15Z"
--- base_model: unsloth/qwen2.5-vl-7b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2_5_vl - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Tenshiren - **License:** apache-2.0 - **Finetuned from model:** unsloth/qwen2.5-vl-7b-instruct-bnb-4bit This qwen2_5_vl model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF
mradermacher
"2025-05-04T09:00:28Z"
15
0
transformers
[ "transformers", "gguf", "medical", "llama-factory", "en", "base_model:Roselia-penguin/8-bit_medical_Qwen1.5-7B-Chat", "base_model:quantized:Roselia-penguin/8-bit_medical_Qwen1.5-7B-Chat", "license:apache-2.0", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
"2025-05-04T03:40:57Z"
--- base_model: Roselia-penguin/8-bit_medical_Qwen1.5-7B-Chat language: - en library_name: transformers license: apache-2.0 quantized_by: mradermacher tags: - medical - llama-factory --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Roselia-penguin/8-bit_medical_Qwen1.5-7B-Chat <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-IQ1_S.gguf) | i1-IQ1_S | 2.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-IQ1_M.gguf) | i1-IQ1_M | 2.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.5 | | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.6 | | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-IQ2_S.gguf) | i1-IQ2_S | 2.9 | | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-Q2_K_S.gguf) | i1-Q2_K_S | 3.0 | very low quality | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-IQ2_M.gguf) | i1-IQ2_M | 3.0 | | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-Q2_K.gguf) | i1-Q2_K | 3.2 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 3.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.5 | | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-IQ3_S.gguf) | i1-IQ3_S | 3.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.7 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-IQ3_M.gguf) | i1-IQ3_M | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-Q3_K_M.gguf) | i1-Q3_K_M | 4.0 | IQ3_S probably better | | 
[GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-Q3_K_L.gguf) | i1-Q3_K_L | 4.3 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-IQ4_XS.gguf) | i1-IQ4_XS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.6 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-Q4_0.gguf) | i1-Q4_0 | 4.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.6 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.9 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-Q4_1.gguf) | i1-Q4_1 | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-Q5_K_S.gguf) | i1-Q5_K_S | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF/resolve/main/8-bit_medical_Qwen1.5-7B-Chat.i1-Q6_K.gguf) | i1-Q6_K | 6.4 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
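A hedged quick-start sketch (the file name comes from the quant table above; recent llama.cpp builds can fetch GGUF files from the Hub directly, and the prompt is illustrative):

```bash
llama-cli --hf-repo mradermacher/8-bit_medical_Qwen1.5-7B-Chat-i1-GGUF \
  --hf-file 8-bit_medical_Qwen1.5-7B-Chat.i1-Q4_K_M.gguf \
  -p "List three common causes of fatigue."
```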
mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF
mradermacher
"2025-05-04T08:56:24Z"
16
0
transformers
[ "transformers", "gguf", "en", "base_model:Shaleen123/MedicalEDI-14b-EDI-Reasoning-400", "base_model:quantized:Shaleen123/MedicalEDI-14b-EDI-Reasoning-400", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
"2025-05-04T02:50:09Z"
--- base_model: Shaleen123/MedicalEDI-14b-EDI-Reasoning-400 language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Shaleen123/MedicalEDI-14b-EDI-Reasoning-400 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 | IQ3_S probably better | | 
[GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-400.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
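A hedged quick-start sketch for serving (the file name comes from the quant table above; the context size is illustrative):

```bash
llama-server --hf-repo mradermacher/MedicalEDI-14b-EDI-Reasoning-400-i1-GGUF \
  --hf-file MedicalEDI-14b-EDI-Reasoning-400.i1-Q4_K_M.gguf \
  -c 4096
```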
FaceTech/FaceTech
FaceTech
"2025-05-04T08:51:17Z"
0
1
null
[ "base_model:FunAudioLLM/SenseVoiceSmall", "base_model:finetune:FunAudioLLM/SenseVoiceSmall", "license:apache-2.0", "region:us" ]
null
"2025-05-03T09:17:13Z"
--- license: apache-2.0 base_model: - FunAudioLLM/SenseVoiceSmall --- # FaceTech ## Project Overview ### 音盾视卫 — a deepfake audio and video detection system ![Static Badge](https://img.shields.io/badge/GitHub-FaceTech-red?logo=GitHub&link=https%3A%2F%2Fgithub.com%2FDingdust%2FFaceTech) ![Static Badge](https://img.shields.io/badge/HuggingFace-FaceTech-yellow?logo=HuggingFace&link=https%3A%2F%2Fhuggingface.co%2FFaceTech%2FFaceTech) ### Development environment ![Static Badge](https://img.shields.io/badge/Python-3.13-blue?logo=Python&link=https%3A%2F%2Fwww.python.org%2F) ![Static Badge](https://img.shields.io/badge/PyCharm-2025.1-green?logo=PyCharm&link=https%3A%2F%2Fwww.jetbrains.com.cn%2Fpycharm%2F) ![Static Badge](https://img.shields.io/badge/PyTorch-2.7.0%2Bcu118-orange?logo=PyTorch&link=https%3A%2F%2Fmirrors.aliyun.com%2Fpytorch-wheels%2Fcu118%2F) ## Installation #### The project provides a one-click installer, `setup.ps1`, for one-step installation and deployment on Windows ```powershell git clone https://github.com/Dingdust/FaceTech.git conda create -n FaceTech python=3.13 -y conda activate FaceTech pip install torch torchaudio torchvision -f https://mirrors.aliyun.com/pytorch-wheels/cu118/ pip install "PyQt6-Fluent-Widgets[full]" -i https://pypi.org/simple/ pip install opencv-python sounddevice soundfile librosa numpy efficientnet_pytorch Set-Location FaceTech pip install ./editdistance-0.8.1 pip install sentencepiece-0.2.0-cp313-cp313-win_amd64.whl Set-Location dlib python ./setup.py install Set-Location .. pip install funasr face_recognition git lfs install git clone https://hf-mirror.com/FaceTech/FaceTech models Move-Item models/RawNet.pth ./audio_detect Move-Item models/deepfake_detector.pth ./deepfake_detect Move-Item models/model.pt ./audio_asr/SenseVoice Remove-Item -Path models -Recurse -Force Remove-Item -Path dlib -Recurse -Force Remove-Item -Path editdistance-0.8.1 -Recurse -Force Remove-Item sentencepiece-0.2.0-cp313-cp313-win_amd64.whl Clear-Host python FaceTech.py ``` #### Before running the installer, make sure you have an NVIDIA GPU with CUDA 11.8 support #### Installation and deployment are best performed on a network with unrestricted internet access ## Usage Run `FaceTech.py` to launch the application. #### Optional: removing noisy startup messages * Open Lib\site-packages\qfluentwidgets\common\config.py in the current conda environment * Comment out the following code ```python # config.py, lines 406-409 try: print(ALERT) except UnicodeEncodeError: print(ALERT.replace("📢", "")) ``` * Or change the following code ```python # config.py, line 14 ALERT = "" ``` * Open Lib\site-packages\funasr\utils\version_checker.py in the current conda environment * Comment out the following code ```python # version_checker.py, line 19 print(f"funasr version: {current_version}.") ``` * Open Lib\site-packages\funasr\auto\auto_model.py in the current conda environment * Change the following code ```python # auto_model.py, line 327 disable_pbar = self.kwargs.get("disable_pbar", True) ``` ## Contributing ### Issues and pull requests are welcome. Please make sure to update the documentation and test your changes before submitting.
fedovtt/bdcfa455-462c-4a83-bf44-018244324bbf
fedovtt
"2025-05-04T08:50:47Z"
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM2-360M", "base_model:adapter:unsloth/SmolLM2-360M", "license:apache-2.0", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-05-04T08:45:49Z"
--- library_name: peft license: apache-2.0 base_model: unsloth/SmolLM2-360M tags: - axolotl - generated_from_trainer model-index: - name: bdcfa455-462c-4a83-bf44-018244324bbf results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: unsloth/SmolLM2-360M bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - ad53ac34880a775e_train_data.json ds_type: json format: custom path: /workspace/input_data/ad53ac34880a775e_train_data.json type: field_instruction: Q field_output: A format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.55 group_by_length: false hub_model_id: fedovtt/bdcfa455-462c-4a83-bf44-018244324bbf hub_repo: null hub_strategy: end hub_token: null learning_rate: 3.0e-06 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 150 micro_batch_size: 10 mixed_precision: bf16 mlflow_experiment_name: /tmp/ad53ac34880a775e_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 2048 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: d5438032-e7ea-460b-9173-4766d4ba879d wandb_project: s56-28 wandb_run: your_name wandb_runid: d5438032-e7ea-460b-9173-4766d4ba879d warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # bdcfa455-462c-4a83-bf44-018244324bbf This model is a fine-tuned version of [unsloth/SmolLM2-360M](https://huggingface.co/unsloth/SmolLM2-360M) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8326 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-06 - train_batch_size: 10 - eval_batch_size: 10 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 2.0406 | 0.0530 | 150 | 1.8326 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
Geraldine/FineQwen3-0.6B-sft-unimarc
Geraldine
"2025-05-04T08:42:51Z"
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-04T08:41:38Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ail-sa/kevin_plus_bald_fs_v2_caption
ail-sa
"2025-05-04T08:42:35Z"
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-05-04T08:14:59Z"
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: Sid --- # Kevin_Plus_Bald_Fs_V2_Caption <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `Sid` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "Sid", "lora_weights": "https://huggingface.co/ail-sa/kevin_plus_bald_fs_v2_caption/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('ail-sa/kevin_plus_bald_fs_v2_caption', weight_name='lora.safetensors') image = pipeline('Sid').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/ail-sa/kevin_plus_bald_fs_v2_caption/discussions) to add images that show off what you’ve made with this LoRA.
Chrystal02/Regina
Chrystal02
"2025-05-04T08:32:24Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2025-05-04T08:32:24Z"
--- license: apache-2.0 ---
magicslabnu/DNABERT-2
magicslabnu
"2025-05-04T08:25:55Z"
0
0
null
[ "safetensors", "bert", "biology", "medical", "custom_code", "arxiv:2505.00598", "region:us" ]
null
"2025-05-04T08:20:24Z"
---
metrics:
- matthews_correlation
- f1
tags:
- biology
- medical
---

This is the official pre-trained baseline model introduced in [Fast and Low-Cost Genomic Foundation Models via Outlier Removal](https://arxiv.org/abs/2505.00598).

We sincerely thank the MosaicML team for the [MosaicBERT](https://openreview.net/forum?id=5zipcfLC2Z) implementation, which serves as the basis of DNABERT-2 development.

DNABERT-2 is a transformer-based genome foundation model trained on multi-species genomes.

To load the model from Hugging Face:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("zhihan1996/DNABERT-2-117M", trust_remote_code=True)
model = AutoModel.from_pretrained("zhihan1996/DNABERT-2-117M", trust_remote_code=True)
```

To calculate the embedding of a DNA sequence:

```python
dna = "ACGTAGCATCGGATCTATCTATCGACACTTGGTTATCGATCTACGAGCATCTCGTTAGC"
inputs = tokenizer(dna, return_tensors='pt')["input_ids"]
hidden_states = model(inputs)[0]  # [1, sequence_length, 768]

# embedding with mean pooling
embedding_mean = torch.mean(hidden_states[0], dim=0)
print(embedding_mean.shape)  # expect to be 768

# embedding with max pooling
embedding_max = torch.max(hidden_states[0], dim=0)[0]
print(embedding_max.shape)  # expect to be 768
```
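When embedding several sequences at once, padded positions should be excluded from the pooling. A minimal sketch (our addition, assuming the remote-code forward accepts an `attention_mask` in the standard BERT fashion; the sequences are toy examples):

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("zhihan1996/DNABERT-2-117M", trust_remote_code=True)
model = AutoModel.from_pretrained("zhihan1996/DNABERT-2-117M", trust_remote_code=True)

# Toy sequences of different lengths; the tokenizer pads the batch.
sequences = ["ACGTAGCATCGGATCTATCTATCGACACTTGG", "TTATCGATCTACGAGCATCTCGTTAGC"]
batch = tokenizer(sequences, return_tensors="pt", padding=True)

# Assumption: the remote-code forward accepts attention_mask like standard BERT models.
hidden_states = model(batch["input_ids"], attention_mask=batch["attention_mask"])[0]

# Zero out padded positions before averaging so padding does not bias the mean.
mask = batch["attention_mask"].unsqueeze(-1).type_as(hidden_states)
embedding_mean = (hidden_states * mask).sum(dim=1) / mask.sum(dim=1)
print(embedding_mean.shape)  # expect [2, 768]
```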
LandCruiser/sn21_omegav1_0405_3
LandCruiser
"2025-05-04T08:20:34Z"
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
"2025-05-04T08:01:32Z"
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
duongve/Loras_Diffusion_model
duongve
"2025-05-04T08:14:26Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2025-03-30T04:03:27Z"
--- license: apache-2.0 ---
ThijsL202/Qwen3-30B-A7.5B-24-Grand-Brainstorm-Q8_0-GGUF
ThijsL202
"2025-05-04T08:11:31Z"
0
0
transformers
[ "transformers", "gguf", "32 k context", "reasoning", "thinking", "qwen3", "24 experts", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:DavidAU/Qwen3-30B-A7.5B-24-Grand-Brainstorm", "base_model:quantized:DavidAU/Qwen3-30B-A7.5B-24-Grand-Brainstorm", "endpoints_compatible", "region:us", "conversational" ]
text-generation
"2025-05-04T08:09:09Z"
---
base_model: DavidAU/Qwen3-30B-A7.5B-24-Grand-Brainstorm
library_name: transformers
pipeline_tag: text-generation
tags:
- 32 k context
- reasoning
- thinking
- qwen3
- 24 experts
- llama-cpp
- gguf-my-repo
---

# ThijsL202/Qwen3-30B-A7.5B-24-Grand-Brainstorm-Q8_0-GGUF
This model was converted to GGUF format from [`DavidAU/Qwen3-30B-A7.5B-24-Grand-Brainstorm`](https://huggingface.co/DavidAU/Qwen3-30B-A7.5B-24-Grand-Brainstorm) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/DavidAU/Qwen3-30B-A7.5B-24-Grand-Brainstorm) for more details on the model.

## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo ThijsL202/Qwen3-30B-A7.5B-24-Grand-Brainstorm-Q8_0-GGUF --hf-file qwen3-30b-a7.5b-24-grand-brainstorm-q8_0.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo ThijsL202/Qwen3-30B-A7.5B-24-Grand-Brainstorm-Q8_0-GGUF --hf-file qwen3-30b-a7.5b-24-grand-brainstorm-q8_0.gguf -c 2048
```

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ThijsL202/Qwen3-30B-A7.5B-24-Grand-Brainstorm-Q8_0-GGUF --hf-file qwen3-30b-a7.5b-24-grand-brainstorm-q8_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ThijsL202/Qwen3-30B-A7.5B-24-Grand-Brainstorm-Q8_0-GGUF --hf-file qwen3-30b-a7.5b-24-grand-brainstorm-q8_0.gguf -c 2048
```
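Once `llama-server` is running, it exposes an OpenAI-compatible HTTP API. A minimal sketch of querying it (our addition; the host, port 8080 default, and payload shape are assumptions based on llama-server's documented defaults):

```python
# Query a running llama-server instance over its OpenAI-compatible endpoint.
import requests

resp = requests.post(
    "http://localhost:8080/v1/chat/completions",  # assumes the default port
    json={
        "messages": [{"role": "user", "content": "Hello!"}],
        "max_tokens": 64,
    },
)
print(resp.json()["choices"][0]["message"]["content"])
```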
drwlf/PsychoQwenTiny_lora
drwlf
"2025-05-04T08:10:53Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-05-04T08:10:40Z"
---
base_model: unsloth/qwen3-1.7b
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** drwlf
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen3-1.7b

This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
kamelcharaf/GRPO-qwen2.5-14B-quant-qwen2.5-14B-quant-mrd3-s2-sum
kamelcharaf
"2025-05-04T08:09:21Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "conversational", "arxiv:2402.03300", "base_model:kamelcharaf/Qwen2.5-14B-Instruct-quantized-4bit", "base_model:quantized:kamelcharaf/Qwen2.5-14B-Instruct-quantized-4bit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2025-04-12T20:15:22Z"
--- base_model: kamelcharaf/Qwen2.5-14B-Instruct-quantized-4bit library_name: transformers model_name: GRPO-qwen2.5-14B-quant-qwen2.5-14B-quant-mrd3-s2-sum tags: - generated_from_trainer licence: license --- # Model Card for GRPO-qwen2.5-14B-quant-qwen2.5-14B-quant-mrd3-s2-sum This model is a fine-tuned version of [kamelcharaf/Qwen2.5-14B-Instruct-quantized-4bit](https://huggingface.co/kamelcharaf/Qwen2.5-14B-Instruct-quantized-4bit). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="kamelcharaf/GRPO-qwen2.5-14B-quant-qwen2.5-14B-quant-mrd3-s2-sum", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/kamel-charaf-epfl/huggingface/runs/ofx0gal2) This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.17.0 - Transformers: 4.48.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
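As an alternative to the pipeline call above, a minimal sketch of loading the 4-bit checkpoint directly (our addition; assumes `bitsandbytes` and `accelerate` are installed so the quantized weights can be placed automatically):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kamelcharaf/GRPO-qwen2.5-14B-quant-qwen2.5-14B-quant-mrd3-s2-sum"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" lets accelerate place the serialized 4-bit weights.
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize GRPO in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```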
ftytfuy/gyutyu
ftytfuy
"2025-05-04T07:55:12Z"
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
"2025-05-04T07:55:12Z"
--- license: bigscience-bloom-rail-1.0 ---
tjwjdrok/qwen2_7b_lora_tuning_test_model
tjwjdrok
"2025-05-04T07:53:45Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-05-04T06:58:08Z"
---
base_model: unsloth/qwen2-7b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** tjwjdrok
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2-7b-bnb-4bit

This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
gpham/all-mpnet-base-v2-setfit-arxiv
gpham
"2025-05-04T07:15:20Z"
3
0
setfit
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/all-mpnet-base-v2", "base_model:finetune:sentence-transformers/all-mpnet-base-v2", "model-index", "region:us" ]
text-classification
"2025-05-04T07:14:57Z"
--- tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: 'Review on Quantum Computing for Lattice Field Theory In these proceedings, we review recent advances in applying quantum computing to lattice field theory. Quantum computing offers the prospect to simulate lattice field theories in parameter regimes that are largely inaccessible with the conventional Monte Carlo approach, such as the sign-problem afflicted regimes of finite baryon density, topological terms, and out-of-equilibrium dynamics. First proof-of-concept quantum computations of lattice gauge theories in (1+1) dimensions have been accomplished, and first resource-efficient quantum algorithms for lattice gauge theories in (1+1) and (2+1) dimensions have been developed. The path towards quantum computations of (3+1)-dimensional lattice gauge theories, including Lattice QCD, requires many incremental steps of improving both quantum hardware and quantum algorithms. After reviewing these requirements and recent advances, we discuss the main challenges and future directions.' - text: "Beating full state tomography for unentangled spectrum estimation\nHow many\ \ copies of a mixed state $\\rho \\in \\mathbb{C}^{d \\times d}$ are\nneeded to\ \ learn its spectrum? To date, the best known algorithms for spectrum\nestimation\ \ require as many copies as full state tomography, suggesting the\npossibility\ \ that learning a state's spectrum might be as difficult as learning\nthe entire\ \ state. We show that this is not the case in the setting of\nunentangled measurements,\ \ by giving a spectrum estimation algorithm that uses\n$n = O(d^3\\cdot (\\log\\\ log(d) / \\log(d))^4 )$ copies of $\\rho$, which is\nasymptotically fewer than\ \ the $n = \\Omega(d^3)$ copies necessary for full state\ntomography. Our algorithm\ \ is inspired by the technique of local moment matching\nfrom classical statistics,\ \ and shows how it can be applied in the quantum\nsetting.\n As an important\ \ subroutine in our spectrum estimation algorithm, we give an\nestimator of the\ \ $k$-th moment $\\operatorname{tr}(\\rho^k)$ which performs\nunentangled measurements\ \ and uses $O(d^{3-2/k})$ copies of $\\rho$ in order to\nachieve a constant multiplicative\ \ error. This directly translates to an\nadditive-error estimator of quantum Renyi\ \ entropy of order $k$ with the same\nnumber of copies.\n Finally, we present\ \ numerical evidence that the sample complexity of spectrum\nestimation can only\ \ improve over full state tomography by a sub-polynomial\nfactor. Specifically,\ \ for spectrum learning with fully entangled measurements,\nwe run simulations\ \ which suggest a lower bound of $\\Omega(d^{2 - \\gamma})$\ncopies for any constant\ \ $\\gamma > 0$. From this, we conclude the current best\nlower bound of $\\Omega(d)$\ \ is likely not tight." - text: 'Automated Bug Report Prioritization in Large Open-Source Projects Large open-source projects receive a large number of issues (known as bugs), including software defect (i.e., bug) reports and new feature requests from their user and developer communities at a fast rate. The often limited project resources do not allow them to deal with all issues. Instead, they have to prioritize them according to the project''s priorities and the issues'' severities. In this paper, we propose a novel approach to automated bug prioritization based on the natural language text of the bug reports that are stored in the open bug repositories of the issue-tracking systems. 
We conduct topic modeling using a variant of LDA called TopicMiner-MTM and text classification with the BERT large language model to achieve a higher performance level compared to the state-of-the-art. Experimental results using an existing reference dataset containing 85,156 bug reports of the Eclipse Platform project indicate that we outperform existing approaches in terms of Accuracy, Precision, Recall, and F1-measure of the bug report priority prediction.' - text: "Nearby open clusters with tidal features: golden sample selection and 3D\n\ \ structure\nOpen clusters offer unique opportunities to study stellar dynamics\ \ and\nevolution under the influence of their internal gravity, the Milky Way's\n\ gravitational field, and the interactions with encounters. Using the Gaia DR3\n\ data for a catalog of open clusters within 500 parsecs that exhibit tidal\nfeatures\ \ reported by the literature, we apply a novel method based on 3D\nprincipal component\ \ analysis to select a ``golden sample'' of nearby open\nclusters with minimal\ \ line-of-sight distortions. This approach ensures a\nsystematic comparison of\ \ 3D and 2D structural parameters for tidally perturbed\nclusters. The selected\ \ golden sample includes Blanco 1, Melotte 20, Melotte 22,\nNGC 2632, NGC 7092,\ \ NGC 1662, Roslund 6 and Melotte 111. We analyze these\nclusters by fitting both\ \ 2D and 3D King Profiles to their stellar density\ndistributions. Our results\ \ reveal systematic discrepancies: most of the golden\nsample clusters exhibit\ \ larger 3D tidal radii compared to their 2D\ncounterparts, demonstrating that\ \ the 2D projection effects bias the measured\ncluster size. Furthermore, the\ \ 3D density profiles show stronger deviations\nfrom King profiles at the tidal\ \ radii ($\\Delta \\rho_{\\rm 3D} > \\Delta \\rho_{\\rm\n2D}$), highlighting enhanced\ \ sensitivity to tidal disturbances. Additionally,\nwe investigate the spatial\ \ distribution of cluster members relative to their\nbulk motion in the Galactic\ \ plane. We find that some clusters exhibit tidal\nfeatures oriented perpendicular\ \ to their direction of motion, which can be\nattributed to the fact that the\ \ current surveys only detect the curved inner\nregions of the tidal features.\ \ In conclusion, this work offers a golden sample\nof nearby open clusters that\ \ are most reliable for 3D structure analysis and\nunderscores the necessity of\ \ 3D analysis in characterizing OC morphological\nasymmetries, determining cluster\ \ size, and identifying tidal features." - text: "Revisiting the physical properties of (LaS)1+d(NbS2) misfit-layered\n compounds\n\ Electrical transport in polycrystalline and single-crystalline (LaS)1+d(NbS2)\n\ misfit-layered compounds was measured. Polycrystalline samples were synthesized\n\ using S raw materials of different purities (2N or 6N), and single-crystalline\n\ samples were grown using two types of transport agents (2NH4Cl+PbCl2 or NH4Cl)\n\ via the chemical vapor transport method. The temperature dependence on\nresistivity\ \ dropped at 1.3-2.0 K for some of the samples, which might be\naffected by the\ \ unknown impurity. (LaS)1+d(NbS2) misfit-layered compounds for\nthe main phase\ \ of those obtained samples exhibited no superconductivity above\n0.2 K by the\ \ resistivity measurement." 
metrics: - f1 pipeline_tag: text-classification library_name: setfit inference: true base_model: sentence-transformers/all-mpnet-base-v2 model-index: - name: SetFit with sentence-transformers/all-mpnet-base-v2 results: - task: type: text-classification name: Text Classification dataset: name: Unknown type: unknown split: test metrics: - type: f1 value: 0.5294216467829347 name: F1 --- # SetFit with sentence-transformers/all-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance is used for classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Model Details ### Model Description - **Model Type:** SetFit - **Sentence Transformer body:** [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) - **Classification head:** a [LogisticRegression](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) instance - **Maximum Sequence Length:** 384 tokens - **Number of Classes:** 20 classes <!-- - **Training Dataset:** [Unknown](https://huggingface.co/datasets/unknown) --> <!-- - **Language:** Unknown --> <!-- - **License:** Unknown --> ### Model Sources - **Repository:** [SetFit on GitHub](https://github.com/huggingface/setfit) - **Paper:** [Efficient Few-Shot Learning Without Prompts](https://arxiv.org/abs/2209.11055) - **Blogpost:** [SetFit: Efficient Few-Shot Learning Without Prompts](https://huggingface.co/blog/setfit) ### Model Labels | Label | Examples | 
|:------|:-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | 18 | <ul><li>"Practical Application of the Quantum Carleman Lattice Boltzmann Method\n in Industrial CFD Simulations\nComputational Fluid Dynamics simulations are crucial in industrial\napplications but require extensive computational resources, particularly for\nextreme turbulent regimes. While classical digital approaches remain the\nstandard, quantum computing promises a breakthrough by enabling a more\nefficient encoding of large-scale simulations with a limited number of qubits.\n This work presents a practical numerical assessment of a hybrid\nquantum-classical approach to CFD based on the Lattice Boltzmann Method (LBM).\nThe inherently non-linear LBM equations are linearized via a Carleman expansion\nand solved using the quantum Harrow Hassidim Lloyd algorithm (HHL). We evaluate\nthis method on three benchmark cases featuring different boundary conditions,\nperiodic, bounceback, and moving wall, using statevector emulation on\nhigh-performance computing resources.\n Our results confirm the validity of the approach, achieving median error\nfidelities on the order of $10^{-3}$ and success probabilities sufficient for\npractical quantum state sampling. Notably, the spectral properties of small\nlattice systems closely approximate those of larger ones, suggesting a pathway\nto mitigate one of HHL's bottlenecks: eigenvalue pre-evaluation."</li><li>'On the Generalization of Adversarially Trained Quantum Classifiers\nQuantum classifiers are vulnerable to adversarial attacks that manipulate\ntheir input classical or quantum data. A promising countermeasure is\nadversarial training, where quantum classifiers are trained by using an\nattack-aware, adversarial loss function. This work establishes novel bounds on\nthe generalization error of adversarially trained quantum classifiers when\ntested in the presence of perturbation-constrained adversaries. The bounds\nquantify the excess generalization error incurred to ensure robustness to\nadversarial attacks as scaling with the training sample size $m$ as\n$1/\\sqrt{m}$, while yielding insights into the impact of the quantum embedding.\nFor quantum binary classifiers employing \\textit{rotation embedding}, we find\nthat, in the presence of adversarial attacks on classical inputs $\\mathbf{x}$,\nthe increase in sample complexity due to adversarial training over conventional\ntraining vanishes in the limit of high dimensional inputs $\\mathbf{x}$. In\ncontrast, when the adversary can directly attack the quantum state\n$\\rho(\\mathbf{x})$ encoding the input $\\mathbf{x}$, the excess generalization\nerror depends on the choice of embedding only through its Hilbert space\ndimension. The results are also extended to multi-class classifiers. 
We\nvalidate our theoretical findings with numerical experiments.'</li><li>'Coupled Instantons In A Four-Well Potential With Application To The\n Tunneling Of A Composite Particle\nCoupled instantons are introduced by generalizing the double well potential\nto multiple mutually coupled wells. Physically this corresponds to the\nsimultaneous tunneling of multiple degrees of freedom. A system with four equal\nminima is examined in detail. It has three instanton types or flavors with\ndistinct actions. For weak coupling and subject to there being a single large\n(or small) parameter, the interactive system can be handled perturbatively. The\nzero mode problem arising from time translation symmetry is handled via the\nFadeev-Popov procedure. A diagrammatic procedure allows corrections to the\nfluctuation determinant to be calculated systematically. Independent instanton\ncontributions are summed over by extending the dilute gas approximation to\nthree flavors and energy splittings of the lowest four states is calculated.\nAll tunneling amplitudes are concisely expressed in terms of elementary\nfunctions. While the model is possibly useful for a variety of physical\nsystems, an application is made here to the tunneling of a composite particle\nin one dimension.'</li></ul> | | 7 | <ul><li>'Scalar and tensor charmonium resonances in coupled-channel scattering\n from QCD\nWe determine $J^{PC}=0^{++}$ and $2^{++}$ hadron-hadron scattering amplitudes\nin the charmonium energy region up to 4100 MeV using lattice QCD, a\nfirst-principles approach to QCD. Working at $m_\\pi\\approx 391$ MeV, more than\n200 finite-volume energy levels are computed and these are used in extensions\nof the L\\"uscher formalism to determine infinite-volume coupled-channel\nscattering amplitudes. We find that this energy region contains a single\n$\\chi_{c0}$ and a single $\\chi_{c2}$ resonance. Both are found as pole\nsingularities on the closest unphysical Riemann sheet, just below 4000 MeV with\nwidths around 70 MeV. The largest couplings are to kinematically-closed $D^*\n\\bar{D}^*$ channels in $S$-wave, and couplings to several decay channels\nconsisting of pairs of open-charm mesons are found to be large and significant\nin both cases. Above the ground state $\\chi_{c0}$, no other scalar bound-states\nor near-$D\\bar{D}$ threshold resonances are found, in contrast to several\ntheoretical and experimental studies.'</li><li>'Quasi-degenerate baryon energy states, the Feynman--Hellmann theorem and\n transition matrix elements\nThe standard method for determining matrix elements in lattice QCD requires\nthe computation of three-point correlation functions. This has the disadvantage\nof requiring two large time separations: one between the hadron source and\noperator and the other from the operator to the hadron sink. Here we consider\nan alternative formalism, based on the Dyson expansion leading to the\nFeynman-Hellmann theorem, which only requires the computation of two-point\ncorrelation functions. Both the cases of degenerate energy levels and\nquasi-degenerate energy levels which correspond to diagonal and transition\nmatrix elements respectively can be considered in this formalism. 
As an example\nnumerical results for the Sigma to Nucleon vector transition matrix element are\npresented.'</li><li>"Beyond Generalized Eigenvalues in Lattice Quantum Field Theory\nTwo analysis techniques, the generalized eigenvalue method (GEM) or Prony's\n(or related) method (PM), are commonly used to analyze statistical estimates of\ncorrelation functions produced in lattice quantum field theory calculations.\nGEM takes full advantage of the matrix structure of correlation functions but\nonly considers individual pairs of time separations when much more data exists.\nPM can be applied to many time separations and many individual matrix elements\nsimultaneously but does not fully exploit the matrix structure of the\ncorrelation function. We combine both these methods into a single framework\nbased on matrix polynomials. As these algebraic methods are well known for\nproducing extensive spectral information about statistically-noisy data, the\nmethod should be paired with some information criteria, like the recently\nproposed Bayesean model averaging."</li></ul> | | 12 | <ul><li>'Persistence of chimera states and the challenge for synchronization in\n real-world networks\nThe emergence of order in nature manifests in different phenomena, with\nsynchronization being one of the most representative examples. Understanding\nthe role played by the interactions between the constituting parts of a complex\nsystem in synchronization has become a pivotal research question bridging\nnetwork science and dynamical systems. Particular attention has been paid to\nthe emergence of chimera states, where subsets of synchronized oscillations\ncoexist with asynchronous ones. Such coexistence of coherence and incoherence\nis a perfect example where order and disorder can persist in a long-lasting\nregime. Although considerable progress has been made in recent years to\nunderstand such coherent and (coexisting) incoherent states, how they manifest\nin real-world networks remains to be addressed. Based on a symmetry-breaking\nmechanism, in this paper, we shed light on the role that non-normality, a\nubiquitous structural property of real networks, has in the emergence of\nseveral diverse dynamical phenomena, e.g., amplitude chimeras or oscillon\npatterns. Specifically, we demonstrate that the prevalence of source or leader\nnodes in networks leads to the manifestation of phase chimera states.\nThroughout the paper, we emphasize that non-normality poses ongoing challenges\nto global synchronization and is instrumental in the emergence of chimera\nstates.'</li><li>'Nonlinear dynamical systems: Time reversibility {\\it versus} sensitivity\n to the initial conditions\nTime reversal of vast classes of phenomena has direct implications with\npredictability, causality and the second principle of thermodynamics. We\nanalyze in detail time reversibility of a paradigmatic dissipative nonlinear\ndynamical system, namely the logistic map $x_{t+1}=1-ax_t^2$. A close relation\nis revealed between time reversibility and the sensitivity to the initial\nconditions. Indeed, depending on the initial condition and the size of the time\nseries, time reversal can enable the recovery, within a small error bar, of\npast information when the Lyapunov exponent is non-positive, notably at the\nFeigenbaum point (edge of chaos), where weak chaos is known to exist. Past\ninformation is gradually lost for increasingly large Lyapunov exponent (strong\nchaos), notably at $a=2$ where it attains a large value. 
These facts open the\ndoor to diverse novel applications in physicochemical, astronomical, medical,\nfinancial, and other time series.'</li><li>'Sakaguchi Swarmalators\nSwarmalators are phase oscillators that cluster in space, like fireflies\nflashing on a swarm to attract mates. Interactions between particles, which\ntend to synchronize their phases and align their motion, decrease with the\ndistance and phase difference between them, coupling the spatial and phase\ndynamics. In this work, we explore the effects of disorder induced by phase\nfrustration on a system of Swarmalators that move on a one-dimensional ring.\nOur model is inspired by the well-known Kuramoto-Sakaguchi equations. We find,\nnumerically and analytically, the ordered and disordered states that emerge in\nthe system. The active states, not present in the model without disorder,\nresemble states found previously in numerical studies for the 2D Swarmalators\nsystem. One of these states, in particular, shows similarities to turbulence\ngenerated in a flattened media. We show that all ordered states can be\ngenerated for any values of the coupling constants by tuning the phase\nfrustration parameters only. Moreover, many of these combinations display\nmulti-stability.'</li></ul> | | 15 | <ul><li>"MetasurfaceViT: A generic AI model for metasurface inverse design\nMetasurfaces, sub-wavelength artificial structures, can control light's\namplitude, phase, and polar ization, enabling applications in efficient\nimaging, holograms, and sensing. Recent years, AI has witnessed remarkable\nprogress and spurred scientific discovery. In metasurface design, optical\ninverse design has recently emerged as a revolutionary approach. It uses deep\nlearning to create a nonlinear mapping between optical structures and\nfunctions, bypassing time-consuming traditional design and attaining higher\naccuracy. Yet, current deep-learning models for optical design face\nlimitations. They often work only for fixed wavelengths and polarizations, and\nlack universality as input-output vector size changes may require retraining.\nThere's also a lack of compatibility across different application scenarios.\nThis paper introduces MetasurfaceViT, a revolutionary generic AI model. It\nleverages a large amount of data using Jones matrices and physics-informed data\naugmentation. By pre-training through masking wavelengths and polarization\nchannels, it can reconstruct full-wavelength Jones matrices, which will be\nutilized by fine-tuning model to enable inverse design. Finally, a tandem\nworkflow appended by a forward prediction network is introduced to evaluate\nperformance. The versatility of MetasurfaceViT with high prediction accuracy\nwill open a new paradigm for optical inverse design."</li><li>'A hybrid U-Net and Fourier neural operator framework for the large-eddy\n simulation of turbulent flows over periodic hills\nAccurate and efficient predictions of three-dimensional (3D) turbulent flows\nare of significant importance in the fields of science and engineering. In the\ncurrent work, we propose a hybrid U-Net and Fourier neural operator (HUFNO)\nmethod, tailored for mixed periodic and non-periodic boundary conditions which\nare often encountered in complex turbulence problems. The HUFNO model is tested\nin the large-eddy simulation (LES) of 3D periodic hill turbulence featuring\nstrong flow separations. 
Compared to the original Fourier neural operator (FNO)\nand the convolutional neural network (CNN)-based U-Net framework, the HUFNO\nmodel has a higher accuracy in the predictions of the velocity field and\nReynolds stresses. Further numerical experiments in the LES show that the HUFNO\nframework outperforms the traditional Smagorinsky (SMAG) model and the\nwall-adapted local eddy-viscosity (WALE) model in the predictions of the\nturbulence statistics, the energy spectrum, the wall stresses and the flow\nseparation structures, with much lower computational cost. Importantly, the\naccuracy and efficiency are transferable to unseen initial conditions and hill\nshapes, underscoring its great potentials for the fast prediction of strongly\nseparated turbulent flows over curved boundaries.'</li><li>'Vortex droplets and lattice patterns in two-dimensional traps: A\n photonic spin-orbit-coupling perspective\nIn the context of the mean-field exciton-polariton (EP) theory with balanced\nloss and pump, we investigate the formation of lattice structures built of\nindividual vortex-antivortex (VAV) bound states under the action of the\ntwo-dimensional harmonic-oscillator (HO) potential trap and effective\nspin-orbit coupling (SOC), produced by the TE-TM splitting in the polariton\nsystem. The number of VAV elements (pixels) building the structures grow with\nthe increase of self- and cross-interaction coefficients. Depending upon their\nvalues and the trapping frequency, stable ring-shaped, circular, square-shaped,\nrectangular, pentagonal, hexagonal, and triangular patterns are produced, with\nthe central site left vacant or occupied in the lattice patterns of different\ntypes. The results suggest the experimental creation of the new patterns and\ntheir possible use for the design of integrated circuits in EP setups,\ncontrolled by the strengths of the TE-TM splitting, nonlinearity, and HO trap.'</li></ul> | | 8 | <ul><li>'Interplay of $95$ GeV Diphoton Excess and Dark Matter in Supersymmetric\n Triplet Model\nThe decay of the Higgs boson and the nature of dark matter remain fundamental\nchallenges in particle physics. We investigate the $95$ GeV diphoton excess and\ndark matter within the framework of the triplet-extended Minimal Supersymmetric\nStandard Model (TMSSM). In this model, an additional Hypercharge $Y=0$,\n$SU(2)_L$ triplet superfield is introduced. Mixing between the triplet and\ndoublet Higgs states enhances the diphoton signal strength of the $95$ GeV\nHiggs boson, resulting in $\\mu_{\\gamma\\gamma}^{\\text{CMS+ATLAS}} =\n0.24_{-0.08}^{+0.09}$, which is consistent with experimental observations. This\nenhancement arises primarily from charged Higgs loop contributions.\nAdditionally, the model accommodates viable dark matter candidates in the form\nof a bino-dominated neutralino. The relic density is reduced to the observed\nvalue through resonance-enhanced annihilation via the Higgs portal or\nco-annihilation with the triplino or higgsino. This reduction remains\nconsistent with constraints from direct and indirect detection experiments. 
A\ncomprehensive parameter scan demonstrates that the TMSSM can simultaneously\nexplain the $95$ GeV diphoton excess, the observed $125$ GeV Higgs mass, and\nthe dark matter relic density, establishing a compelling and theoretically\nconsistent framework.'</li><li>"Particles in finite volumes and a toy model of decaying neutrons\nIt is well-known that the momentum spectra of particles confined to finite\nspatial volumes deviate from the continuous spectra used for unconfined\nparticles. In this article, we consider real scalar particles confined to\nfinite volumes with periodic boundary conditions, such that the particles'\nspectra are discrete. We directly compute the density matrices describing the\ndecay processes $\\phi \\to \\varphi^2$ and $\\phi \\to \\varphi\\chi\\nu$, and\nsubsequently derive expressions for the decay probabilities both for confined\nand unconfined particles. The latter decay process is used as a rough toy model\nfor a neutron decaying into a proton, an electron, and an anti-electron\nneutrino. We propose that finite volume effects can have an impact on the\noutcomes of experiments measuring the neutron lifetime. In addition, our\nfindings at the toy model level suggest that taking into account possible\ninitial correlations between neutrons and their daughter particles might be\nrelevant as well."</li><li>'$B$ meson decays to vector charmonium(like) states and a $K$ meson: the\n role of final-state interactions\nA series of vector charmonium(like) states, accompanied by a $K$ meson, have\nbeen observed in the decays of $B$ meson. These processes are color-suppressed\nat the quark level, as inferred from topological diagram analysis. In this\nwork, we calculate the branching fractions of the decays $B \\to \\psi K$, where\n$\\psi$ denotes the charmonium(like) states $\\psi(1S)$, $\\psi(2S)$,\n$\\psi(4040)$, $\\psi(3770)$, and $\\psi(4160)$. Our analysis incorporates both\nshort-distance (naive factorization approach) and long-distance (final-state\ninteractions) contributions. Within reasonable parameters, our results align\nwith experimental data except for the $ \\psi(4160)$, suggesting its possible\nexotic nature. Furthermore, we find that long-distance contributions dominate\nthese decay processes, highlighting the crucial role of final-state\ninteractions in the productions of charmonium(like) states in $B$ decays.'</li></ul> | | 11 | <ul><li>"Approximation of Invariant Solutions to the Nonlinear Filtration\n Equation by Modified Pade Approximants\nThis paper deals with a mathematical model for oil filtration in a porous\nmedium and its self-similar and traveling wave regimes. The model consists of\nthe equation for conservation mass and dependencies for porosity, permeability,\nand oil density on pressure. The oil viscosity is considered to be the\nexperimentally expired parabolic relationship on pressure. To close the model,\ntwo types of Darcy law are used: the classic one and the dynamic one describing\nthe relaxation processes during filtration. In the former case, self-similar\nsolutions are studied, while in the latter case, traveling wave solutions are\nthe focus. Using the invariant solutions, the initial model is reduced to the\nnonlinear ordinary differential equations possessing the trajectories vanishing\nat infinity and representing the moving liquid fronts in porous media. To\napproximate these solutions, we elaborate the semi-analytic procedure based on\nmodified Pade approximants. 
In fact, we calculate sequentially Pade\napproximants up to 3d order for a two-point boundary value problem on the\nsemi-infinite domain. A good agreement of evaluated Pade approximants and\nnumerical solutions is observed. The approach provides relatively simple\nquasi-rational expressions of solutions and can be easily adapted for other\ntypes of model's nonlinearity."</li><li>'Hamel equations and quasivelocities for nonholonomic systems with\n inequality constraints\nIn this paper we derive Hamel equations for the motion of nonholonomic\nsystems subject to inequality constraints in quasivelocities. As examples, the\nvertical rolling disk hitting a wall and the Chaplygin sleigh with a knife edge\nconstraint hitting a circular table are shown to illustrate the theoretical\nresults.'</li><li>'${\\mathsf D}^2={\\mathsf H}+1/4$ with point interactions\nLet ${\\mathsf D}$ and ${\\mathsf H}$ be the self-adjoint, one-dimensional\nDirac and Schr\\"odinger operators in $L^{2}(\\mathbb{R};\\mathbb{C}^{2})$ and\n$L^{2}(\\mathbb{R};\\mathbb{C})$ respectively. It is well known that, in absence\nof an external potential, the two operators are related through the equality\n${\\mathsf D}^2 = ({\\mathsf H} + \\frac{1}{4}){\\mathbb 1}$. We show that such a\nkind of relation also holds in the case of $n$-point singular perturbations:\ngiven any self-adjoint realization $\\widehat {\\mathsf D}$ of the formal sum\n${\\mathsf D}+\\sum_{k=1}^{n}\\gamma_{k}\\delta_{y_{k}}$, we explicitly determine\nthe self-adjoint realization $\\widehat{\\mathsf H}$ of ${\\mathsf H}{\\mathbb\n1}+\\sum_{k=1}^{n}(\\alpha_{k}\\delta_{y_{k}}+\\beta_{k}\\delta\'_{y_{k}})$ such that\n${\\widehat{\\mathsf D}}^2 = \\widehat{\\mathsf H} + \\frac{{\\mathbb 1}}{4}$. The\nfound correspondence preserves the subclasses of self-adjoint realizations\ncorresponding to both the local and the separating boundary conditions. Some\nconnections with supersymmetry are provided. The case of nonlocal boundary\nconditions allows the study of the relation ${\\mathsf D}^{2}={\\mathsf\nH}+\\frac14$ for quantum graphs with (at most) two ends; in particular, the\nsquare of the extension corresponding to Kirchhoff-type boundary conditions for\nthe Dirac operator on the graph gives the direct sum of two Schr\\"odinger\noperators on the same graph, one with the usual Kirchhoff boundary conditions\nand the other with a sort of reversed Kirchhoff ones.'</li></ul> | | 19 | <ul><li>'Rank-based transfer learning for high-dimensional survival data with\n application to sepsis data\nSepsis remains a critical challenge due to its high mortality and complex\nprognosis. To address data limitations in studying MSSA sepsis, we extend\nexisting transfer learning frameworks to accommodate transformation models for\nhigh-dimensional survival data. Specifically, we construct a measurement index\nbased on C-index for intelligently identifying the helpful source datasets, and\nthe target model performance is improved by leveraging information from the\nidentified source datasets via performing the transfer step and debiasing step.\nWe further provide an algorithm to construct confidence intervals for each\ncoefficient component. Another significant development is that statistical\nproperties are rigorously established, including $\\ell_1/\\ell_2$-estimation\nerror bounds of the transfer learning algorithm, detection consistency property\nof the transferable source detection algorithm and asymptotic theories for the\nconfidence interval construction. 
Extensive simulations and analysis of\nMIMIC-IV sepsis data demonstrate the estimation and prediction accuracy, and\npractical advantages of our approach, providing significant improvements in\nsurvival estimates for MSSA sepsis patients.'</li><li>'Ireland Topsoil Contamination Analysis: A Clustering Approach\nThis study investigates topsoil contamination in Ireland using geochemical\ndata from the Tellus Programme, analyzing 4,278 soil samples across 17,983\nsquare kilometer. The research employs CPF clustering with spatial constraints\nto classify samples into seven different groups, revealing distinct\ncontamination patterns.'</li><li>"Predicting and Mitigating Agricultural Price Volatility Using Climate\n Scenarios and Risk Models\nAgricultural price volatility challenges sustainable finance, planning, and\npolicy, driven by market dynamics and meteorological factors such as\ntemperature and precipitation. In India, the Minimum Support Price (MSP) system\nacts as implicit crop insurance, shielding farmers from price drops without\npremium payments. We analyze the impact of climate on price volatility for\nsoybean (Madhya Pradesh), rice (Assam), and cotton (Gujarat). Using ERA5-Land\nreanalysis data from the Copernicus Climate Change Service, we analyze\nhistorical climate patterns and evaluate two scenarios: SSP2.4.5 (moderate\ncase) and SSP5.8.5 (severe case). Our findings show that weather conditions\nstrongly influence price fluctuations and that integrating meteorological data\ninto volatility models enhances risk-hedging. Using the Exponential Generalized\nAutoregressive Conditional Heteroskedasticity (EGARCH) model, we estimate\nconditional price volatility and identify cross-correlations between weather\nand price volatility movements. Recognizing MSP's equivalence to a European put\noption, we apply the Black-Scholes model to estimate its implicit premium,\nquantifying its fiscal cost. We propose this novel market-based risk-hedging\nmechanism wherein the government purchases insurance equivalent to MSP,\nleveraging Black-Scholes for accurate premium estimation. Our results\nunderscore the importance of meteorological data in agricultural risk modeling,\nsupporting targeted insurance and strengthening resilience in agricultural\nfinance. This climate-informed financial framework enhances risk-sharing,\nstabilizes prices, and informs sustainable agricultural policy under growing\nclimate uncertainty."</li></ul> | | 17 | <ul><li>'Bitcoin: A life in crises\nIn this study, we investigate the BTC price time-series (17 August 2010-27\nJune 2021) and show that the 2017 pricing episode is not unique. We describe at\nleast ten new events, which occurred since 2010-2011 and span more than five\norders of price magnitudes ($US 1-$US 60k). We find that those events have a\nsimilar duration of approx. 50-100 days. Although we are not able to predict\ntimes of a price peak, we however succeed to approximate the BTC price\nevolution using a function that is similar to a Fibonacci sequence. 
Finally, we\ncomplete a comparison with other types of financial instruments (equities,\ncurrencies, gold) which suggests that BTC may be classified as an illiquid\nasset.'</li><li>'Econometric Model Using Arbitrage Pricing Theory and Quantile Regression\n to Estimate the Risk Factors Driving Crude Oil Returns\nThis work adopts a novel approach to determine the risk and return of crude\noil stocks by employing Arbitrage Pricing Theory (APT) and Quantile Regression\n(QR).The APT identifies the underlying risk factors likely to impact crude oil\nreturns.Subsequently, QR estimates the relationship between the factors and the\nreturns across different quantiles of the distribution. The West Texas\nIntermediate (WTI) crude oil price is used in this study as a benchmark for\ncrude oil prices. WTI price fluctuations can have a significant impact on the\nperformance of crude oil stocks and, subsequently, the global economy.To\ndetermine the proposed models stability, various statistical measures are used\nin this study.The results show that changes in WTI returns can have varying\neffects depending on market conditions and levels of volatility. The study\nhighlights the impact of structural discontinuities on returns, which can be\ncaused by changes in the global economy and the demand for crude oil.The\ninclusion of pandemic, geopolitical, and inflation-related explanatory\nvariables add uniqueness to this study as it considers current global events\nthat can affect crude oil returns.Findings show that the key factors that pose\nmajor risks to returns are industrial production, inflation, the global price\nof energy, the shape of the yield curve, and global economic policy\nuncertainty.This implies that while making investing decisions in WTI futures,\ninvestors should pay particular attention to these elements'</li><li>'Commodities Trading through Deep Policy Gradient Methods\nAlgorithmic trading has gained attention due to its potential for generating\nsuperior returns. This paper investigates the effectiveness of deep\nreinforcement learning (DRL) methods in algorithmic commodities trading. It\nformulates the commodities trading problem as a continuous, discrete-time\nstochastic dynamical system. The proposed system employs a novel\ntime-discretization scheme that adapts to market volatility, enhancing the\nstatistical properties of subsampled financial time series. To optimize\ntransaction-cost- and risk-sensitive trading agents, two policy gradient\nalgorithms, namely actor-based and actor-critic-based approaches, are\nintroduced. These agents utilize CNNs and LSTMs as parametric function\napproximators to map historical price observations to market\npositions.Backtesting on front-month natural gas futures demonstrates that DRL\nmodels increase the Sharpe ratio by $83\\%$ compared to the buy-and-hold\nbaseline. Additionally, the risk profile of the agents can be customized\nthrough a hyperparameter that regulates risk sensitivity in the reward function\nduring the optimization process. The actor-based models outperform the\nactor-critic-based models, while the CNN-based models show a slight performance\nadvantage over the LSTM-based models.'</li></ul> | | 10 | <ul><li>'The Hao-Ng isomorphism theorem for reduced crossed products\nWe prove the Hao-Ng isomorphism for reduced crossed products by locally\ncompact Hausdorff groups. 
More precisely, for a non-degenerate\n$\\mathrm{C}^*$-correspondence $X$ and a generalized gauge action $G\n\\curvearrowright X$ by a locally compact Hausdorff group $G$, we prove the\ncommutation ${\\mathcal{O}}_{X\\rtimes_rG}\\cong {\\mathcal{O}}_X\\rtimes_rG$ of the\nreduced crossed product with the Cuntz-Pimsner C*-algebra construction.'</li><li>"A p-adaptive polytopal discontinuous Galerkin method for high-order\n approximation of brain electrophysiology\nMultiscale mathematical models have shown great promise in computational\nbrain electrophysiology but are still hindered by high computational costs due\nto fast dynamics and complex brain geometries, requiring very fine\nspatio-temporal resolution. This paper introduces a novel p-adaptive\ndiscontinuous Galerkin method on polytopal grids (PolyDG) coupled with\nCrank-Nicolson time integration to approximate such models efficiently. The\np-adaptive method enhances local accuracy via dynamic, element-wise polynomial\nrefinement/de-refinement guided by a-posteriori error estimators. A novel\nclustering algorithm automatizes the selection of elements for adaptive\nupdates, further improving efficiency. A wide set of numerical tests, including\nepileptic seizure simulations in a sagittal section of a human brain stem,\ndemonstrate the method's ability to reduce computational load while maintaining\nthe accuracy of the numerical solution in capturing the dynamics of multiple\nwavefronts."</li><li>'On $L^α$-flatness of Erdős-Littlewood\'s polynomials\nIt is shown that Erd\\"{o}s--Littlewood\'s polynomials are not $L^\\alpha$-flat\nwhen $\\alpha > 2$ is an even integer (and hence for any $\\alpha \\geq 4$). This\nprovides a partial solution to an old problem posed by Littlewood.\nConsequently, we obtain a positive answer to the analogous Erd\\"{o}s--Newman\nconjecture for polynomials with coefficients $\\pm 1$; that is, there is no\nultraflat sequence of polynomials from the class of Erd\\"{o}s--Littlewood\npolynomials.\n Our proof is short and simple. It relies on the classical lemma for $L^p$\nnorms of the Dirichlet kernel, the Marcinkiewicz--Zygmund interpolation\ninequalities, and the $p$-concentration theorem due to A. Bonami and S.\nR\\\'ev\\\'esz.'</li></ul> | | 14 | <ul><li>'Statistical approach of nuclear multifragmentation with realistic\n nuclear equation of state\nIn this work, Canonical Thermodynamical model for nuclear multifragmentation\nhas been updated with realistic nuclear equation of state. Mass distribution,\nintermediate mass fragment multiplicity as well as isospin sensitive\nobservables have been investigated with semi-microscopic approach of\ndetermining nuclear binding and excitation energies. Production of neutron rich\nisotopes as well as isoscaling and isobaric yield ratio parameters have been\nsignificantly modified due to inclusion of this realistic nuclear equation of\nstate.'</li><li>'Impact of MvdW Equation of State and Neutrino Mass on r and s Process\n Heavy Element Nucleosynthesis in Spiral, Elliptical and Dwarf Galactic\n Environments and Kilonovae Events\nWe present an analysis of heavy element production with massive neutrinos in\ngalaxies of varying types (spiral, elliptical, and dwarf) and kilonovae events\nby incorporating a Multicomponent van der Waals (MvdW) equation of state (EoS)\nfor the opacity functions. 
This EoS is applied to derive opacities and\ncalculate the yields of isotopes formed in r-process and s-process\nnucleosynthesis, with and without the influence of neutrino masses or\noscillations. We look at both the lanthanide and actinide sequences using the\nMvdW parameters that involve the interaction strength and excluded volume\neffects. Our results reflect the characteristic differences found in r and s\nprocesses in the synthesis and long-term evolution of isotopes from the U, Th,\nand Sr chain across galactic environments. The inclusion of neutrino masses\nenhances the neutron-to-proton ratio, favoring heavier r-process isotopes and\naltering the overall galactic yields by cross section suppression. These\nfindings offer insights into the interplay of nuclear physics and astrophysical\nenvironments, highlighting the sensitivity of nucleosynthetic pathways to EoS\nmodifications and neutrino physics. We compare these results to metallicity\nprofiles of similar models: the Galactic Leaky Box, the Galactic Inflow, and\nthe Galactic Closed Box models and to the kilonova event GW170781.'</li><li>'Effects of magnetic field on the evolution of energy density\n fluctuations\nWe study the effects of a static and uniform magnetic field on the evolution\nof energy density fluctuations present in a medium. By numerically solving the\nrelativistic Boltzmann-Vlasov equation within the relaxation time\napproximation, we explicitly show that magnetic field can affect the\ncharacteristics of energy density fluctuations at the timescale the system\nachieves local thermodynamic equilibrium. A detailed momentum mode analysis of\nfluctuations reveals that magnetic field increases the damping of mode\noscillations, especially for the low momentum modes. This leads to a reduction\nin the ultraviolet (high momentum) cutoff of fluctuations and also slows down\nthe dissipation of relatively low momentum fluctuation modes. We discuss the\nphenomenological implications of our study on various sources of fluctuations\nin relativistic heavy-ion collisions.'</li></ul> | | 16 | <ul><li>'Investigation of Fractional Compartmental Models with Application to\n Amiodarone Drug Diffusion in Pharmacokinetics\nThis paper presents three fractional models formulated from a classical\nPharmacokinetics compartmental system: commensurable, non-commensurable, and\nimplicit non-commensurable models. Their distinguishing characteristics are\nfurther examined comprehensively. Because analytic solutions for such models\nare typically challenging to obtain, we study the application of the Fractional\nFinite Difference Method (FFDM) to simulate approximate solutions. The\ncharacteristic of the non-commensurable model is shown to be incompatible with\nthe concept of mass balance. However, it appeared to outlast fractional\ncalculus theory when simulating anomalous kinetics. We proved this by fitting\nthe proposed fractional and classical models to an experimental data set\n(amiodarone) and estimated the parameters using the least-square approach. The\nclassical model diverged, but the non-commensurable model predicted a fit\ncomparable to the other two fractional models. The fractional models described\nanomalous diffusion better than classical theories. The numerical results\nshowed that the proposed numerical method is equally efficient in solving any\ncomplex compartmental models, as they performed well in simulations for the\nclassic example of the model.'</li><li>'Stochastic trade-offs and the emergence of diversification in E. 
coli\n evolution experiments\nLaboratory experiments with bacterial colonies, under well-controlled\nconditions often lead to evolutionary diversification, where at least two\necotypes emerge from an initially monomorphic population. Empirical evidence\nsuggests that such "evolutionary branching" occurs stochastically, even under\nfixed and stable conditions. This stochastic nature is characterized by: (i)\noccurrence in a significant fraction, but not all, of experimental settings,\n(ii) emergence at widely varying times, and (iii) variable relative abundances\nof the resulting subpopulations across experiments. Theoretical approaches to\nunderstanding evolutionary branching under these conditions have been\npreviously developed within the (deterministic) framework of "adaptive\ndynamics." Here, we advance the understanding of the stochastic nature of\nevolutionary outcomes by introducing the concept of "stochastic trade-offs" as\nopposed to "hard" ones. The key idea is that the stochasticity of mutations\noccurs in a high-dimensional trait space and this translates into variability\nthat is constrained to a flexible tradeoff curve. By incorporating this\nadditional source of stochasticity, we are able to account for the observed\nempirical variability and make predictions regarding the likelihood of\nevolutionary branching under different conditions. This approach effectively\nbridges the gap between theoretical predictions and experimental observations,\nproviding insights into when and how evolutionary branching is more likely to\noccur in laboratory experiments.'</li><li>"Integrating experimental feedback improves generative models for\n biological sequences\nGenerative probabilistic models have shown promise in designing artificial\nRNA and protein sequences but often suffer from high rates of false positives,\nwhere sequences predicted as functional fail experimental validation. To\naddress this critical limitation, we explore the impact of reintegrating\nexperimental feedback into the model design process. We propose a\nlikelihood-based reintegration scheme, which we test through extensive\ncomputational experiments on both RNA and protein datasets, as well as through\nwet-lab experiments on the self-splicing ribozyme from the group I intron RNA\nfamily where our approach demonstrates particular efficacy. We show that\nintegrating recent experimental data enhances the model's capacity of\ngenerating functional sequences (e.g. from 6.7\\% to 63.7\\% of active designs at\n45 mutations). This feedback-driven approach thus provides a significant\nimprovement in the design of biomolecular sequences by directly tackling the\nfalse-positive challenge."</li></ul> | | 3 | <ul><li>'Endowments, patience types, and uniqueness in two-good HARA utility\n economies\nThis paper establishes a link between endowments, patience types, and the\nparameters of the HARA Bernoulli utility function that ensure equilibrium\nuniqueness in an economy with two goods and two impatience types with additive\nseparable preferences. We provide sufficient conditions that guarantee\nuniqueness of equilibrium for any possible value of $\\gamma$ in the HARA\nutility function\n$\\frac{\\gamma}{1-\\gamma}\\left(b+\\frac{a}{\\gamma}x\\right)^{1-\\gamma}$. 
The\nanalysis contributes to the literature on uniqueness in pure exchange economies\nwith two-goods and two agent types and extends the result in [4].'</li><li>'A Deep Learning Analysis of Climate Change, Innovation, and Uncertainty\nWe study the implications of model uncertainty in a climate-economics\nframework with three types of capital: "dirty" capital that produces carbon\nemissions when used for production, "clean" capital that generates no emissions\nbut is initially less productive than dirty capital, and knowledge capital that\nincreases with R\\&D investment and leads to technological innovation in green\nsector productivity. To solve our high-dimensional, non-linear model framework\nwe implement a neural-network-based global solution method. We show there are\nfirst-order impacts of model uncertainty on optimal decisions and social\nvaluations in our integrated climate-economic-innovation framework. Accounting\nfor interconnected uncertainty over climate dynamics, economic damages from\nclimate change, and the arrival of a green technological change leads to\nsubstantial adjustments to investment in the different capital types in\nanticipation of technological change and the revelation of climate damage\nseverity.'</li><li>'Exploration of legal implications of air and space travel for\n international and domestic travel and the Environment\nThe rapid growth of air and space travel in recent years has resulted in an\nincreased demand for legal regulation in the aviation and aerospace fields.\nThis paper provides an overview of air and space law, including the topics of\naircraft accident investigations, air traffic control, international borders\nand law, and the regulation of space activities. With the increasing complexity\nof air and space travel, it is important to understand the legal implications\nof these activities. This paper examines the various legal aspects of air and\nspace law, including the roles of national governments, international\norganizations, and private entities. It also provides an overview of the legal\nframeworks that govern these activities and the implications of international\nlaw. Finally, it considers the potential for future developments in the field\nof air and space law. This paper provides a comprehensive overview of the legal\naspects of air and space travel and their implications for international and\ndomestic travel, as well as for international business and other activities in\nthe air and space domains.'</li></ul> | | 5 | <ul><li>'Observational properties of regular black holes in Asymptotic Safety\nWe consider the observational properties of a spherically symmetric, static\nregular black hole within the framework of asymptotic safety (AS) as proposed\nby Bonanno et al. The metric resembles the Schwarzschild solution in the\nclassical limit. The departure from Schwarzschild at small scales is controlled\nby a single free parameter related to the ultraviolet (UV) cutoff of the\ntheory. We investigated null and time-like geodesics around the AS metric,\nincluding circular orbits, photon rings and lensing effects. In particular we\nfocused on the optical properties of thin accretion disks in the equatorial\nplane of the object and compared them with those of accretion disks in the\nSchwarzschild metric. 
We found that the radiation flux, luminosity, and\nefficiency of the accretion disk increase with the value of the free parameter.\nUsing a spacetime generic open-source relativistic ray-tracing code, we\nsimulate the K$\\alpha$ iron line profiles emitted by the disk and analyze their\ndeviation from that of the Schwarzschild geometry.'</li><li>"Backreaction in $f(R,G)$ Gravitational Waves\nWe present a comprehensive analysis of gravitational wave dynamics in\n$f(R,G)$ modified gravity, where $R$ is the Ricci scalar and $G$ the\nGauss-Bonnet invariant. By developing a scalar-tensor formulation with two\nauxiliary fields, we systematically investigate both the propagation and\nbackreaction of high-frequency gravitational waves in cosmological backgrounds.\nThe linearized field equations reveal how the Gauss-Bonnet term introduces new\ncurvature-dependent couplings between tensor and scalar degrees of freedom,\nleading to modified dispersion relations and distinctive wave propagation\neffects. On de Sitter backgrounds, we obtain exact decoupled equations for the\ntensor and scalar modes, demonstrating how the additional $G$-dependence alters\nboth the effective masses and energy transport mechanisms compared to pure\n$f(R)$ theories.\n Our derivation of the effective energy-momentum tensor extends Isaacson's\napproach to incorporate the novel scalar field contributions, revealing a\ncomplex hierarchy of characteristic length scales ($\\lambda$, $\\ell$, and\n$\\mathcal{L}$) that govern the backreaction dynamics. The resulting formalism\nsuggests potentially observable signatures in both the propagation (phase\nshifts, amplitude modulation) and stochastic background of gravitational waves.\nThese effects could be probed by next-generation detectors, offering new\nconstraints on the $f(R,G)$ coupling parameters. The theoretical framework\ndeveloped here provides a foundation for future studies of gravitational wave\ngeneration in modified gravity scenarios and their role in cosmological\nstructure formation."</li><li>'Stellar isotropic model in the symmetric teleparallel equivalent of\n general relativity theory\nRecently, the theory of symmetric teleparallel equivalent of general\nrelativity (STEGR) has gained much interest in the cosmology and astrophysics\ncommunity. Within this theory, we discuss the method of deriving a stellar\nisotropic model. In this respect, we implement the equations of motion of STEGR\ntheory to a spacetime that is symmetric in a spherical manner, resulting in a\nset of nonlinear differential equations with more unknowns than equations. To\nsolve this issue, we assume a special form of $g_{tt}$, and suppose a null\nvalue of the anisotropy to obtain the form of $g_{rr}$. We then investigate the\npossibility of obtaining an isotropic stellar model consistent with\nobservational data. To test the stability of our model, we apply the adiabatic\nindex and the Tolman-Oppenheimer-Volkoff equation. Furthermore, we examine our\nmodel using different observed values of radii and masses of pulsars, showing\nthat all of them fit in a consistent way.'</li></ul> | | 2 | <ul><li>"LLM-based Interactive Imitation Learning for Robotic Manipulation\nRecent advancements in machine learning provide methods to train autonomous\nagents capable of handling the increasing complexity of sequential\ndecision-making in robotics. Imitation Learning (IL) is a prominent approach,\nwhere agents learn to control robots based on human demonstrations. 
However, IL\ncommonly suffers from violating the independent and identically distributed\n(i.i.d) assumption in robotic tasks. Interactive Imitation Learning (IIL)\nachieves improved performance by allowing agents to learn from interactive\nfeedback from human teachers. Despite these improvements, both approaches come\nwith significant costs due to the necessity of human involvement. Leveraging\nthe emergent capabilities of Large Language Models (LLMs) in reasoning and\ngenerating human-like responses, we introduce LLM-iTeach -- a novel IIL\nframework that utilizes an LLM as an interactive teacher to enhance agent\nperformance while alleviating the dependence on human resources. Firstly,\nLLM-iTeach uses a hierarchical prompting strategy that guides the LLM in\ngenerating a policy in Python code. Then, with a designed similarity-based\nfeedback mechanism, LLM-iTeach provides corrective and evaluative feedback\ninteractively during the agent's training. We evaluate LLM-iTeach against\nbaseline methods such as Behavior Cloning (BC), an IL method, and CEILing, a\nstate-of-the-art IIL method using a human teacher, on various robotic\nmanipulation tasks. Our results demonstrate that LLM-iTeach surpasses BC in the\nsuccess rate and achieves or even outscores that of CEILing, highlighting the\npotential of LLMs as cost-effective, human-like teachers in interactive\nlearning environments. We further demonstrate the method's potential for\ngeneralization by evaluating it on additional tasks. The code and prompts are\nprovided at: https://github.com/Tubicor/LLM-iTeach."</li><li>"Lifecycle Management of Trustworthy AI Models in 6G Networks: The REASON\n Approach\nArtificial Intelligence (AI) is expected to play a key role in 6G networks\nincluding optimising system management, operation, and evolution. This requires\nsystematic lifecycle management of AI models, ensuring their impact on services\nand stakeholders is continuously monitored. While current 6G initiatives\nintroduce AI, they often fall short in addressing end-to-end intelligence and\ncrucial aspects like trust, transparency, privacy, and verifiability.\nTrustworthy AI is vital, especially for critical infrastructures like 6G. This\npaper introduces the REASON approach for holistically addressing AI's native\nintegration and trustworthiness in future 6G networks. The approach comprises\nAI Orchestration (AIO) for model lifecycle management, Cognition (COG) for\nperformance evaluation and explanation, and AI Monitoring (AIM) for tracking\nand feedback. Digital Twin (DT) technology is leveraged to facilitate real-time\nmonitoring and scenario testing, which are essential for AIO, COG, and AIM. We\ndemonstrate this approach through an AI-enabled xAPP use case, leveraging a DT\nplatform to validate, explain, and deploy trustworthy AI models."</li><li>"AdaptoVision: A Multi-Resolution Image Recognition Model for Robust and\n Scalable Classification\nThis paper introduces AdaptoVision, a novel convolutional neural network\n(CNN) architecture designed to efficiently balance computational complexity and\nclassification accuracy. By leveraging enhanced residual units, depth-wise\nseparable convolutions, and hierarchical skip connections, AdaptoVision\nsignificantly reduces parameter count and computational requirements while\npreserving competitive performance across various benchmark and medical image\ndatasets. 
Extensive experimentation demonstrates that AdaptoVision achieves\nstate-of-the-art on BreakHis dataset and comparable accuracy levels, notably\n95.3\\% on CIFAR-10 and 85.77\\% on CIFAR-100, without relying on any pretrained\nweights. The model's streamlined architecture and strategic simplifications\npromote effective feature extraction and robust generalization, making it\nparticularly suitable for deployment in real-time and resource-constrained\nenvironments."</li></ul> | | 0 | <ul><li>'Modified gravity realizations of quintom dark energy after DESI DR2\nWe investigate the realization of quintom scenario for dynamical dark energy\nwithin modified gravity theories that can efficiently fit the recent\nobservational datasets. Starting from a general effective field theory\nformulation of dark energy in metric-affine geometry, we derive the background\naction in unitary gauge and we demonstrate how both $f(T)$ and $f(Q)$ gravity\ncan naturally realize quintom behavior through appropriate forms and parameter\nchoices. Additionally, using the Gaussian process reconstruction of the latest\nDESI DR2 BAO data combined with SNe and CMB observations, we extract the\nreconstructed dark-energy equation-of-state parameter, showing that it exhibits\nquintom-type evolution, crossing the phantom divide from below. Moreover,\nthrough detailed parameter estimations and application of information criteria,\nwe compare the model with the quadratic one. Our results show that, due to its\nrich structure, modified gravity stands as one of the main candidates for the\nrealization of the data-favoured dynamical dark energy.'</li><li>'Detection of wave activity within a realistic 3D MHD quiet sun\n simulation\nContext. Tracing wave activity from the photosphere to the corona has\nimportant implications for coronal heating and prediction of the solar wind.\nDespite extensive theory and simulations, the detection of waves in realistic\nMHD simulations still presents a large challenge due to wave interaction, mode\nconversion, and damping mechanisms. Aims. We conducted this study to detect\nlocalised wave activity within a realistic MHD simulation of the solar\natmosphere by the Bifrost code. Methods. We present a new method of detecting\nthe most significant contributions of wave activity within localised areas of\nthe domain, aided by Discrete Fourier Transforms and frequency filtering. We\ncorrelate oscillations in the vertical & horizontal magnetic field, velocities\nparallel & perpendicular to the magnetic field, and pressure to infer the\nnature of the dominant wave modes. Results. Our method captures the most\npowerful frequencies and wavenumbers, as well as providing a new diagnostic for\ndamping processes. We infer the presence of magnetoacoustic waves in the\nboundaries of prominent chromospheric/coronal swirling features. We find these\nwaves are likely damped by viscous heating in the swirl boundaries,\ncontributing to heating in the upper atmosphere. Conclusions. Using the most\nsignificant frequencies decomposition, we highlight that energy can be\ntransported from the lower atmosphere to the upper atmosphere through waves and\nfluctuations along the swirl boundaries. Although further analysis is needed to\nconfirm these findings, our new method provides a path forward to investigate\nwave activity in the solar atmosphere'</li><li>'Is Lorentz invariance violation found?\nLorentz invariance violation (LIV) has long been recognized as an observable\nlow-energy signature of quantum gravity. 
In spite of a great effort to detect\nLIV effects, so far only lower bounds have been derived. The high energy\nphotons from the gamma ray burst GRB 221009A have been detected by the LHAASO\ncollaboration and one at ${\\cal E} \\simeq 251 \\, \\rm TeV$ by the Carpet\ncollaboration using a partial data set. Very recently, the Carpet collaboration\nhas completed the full data analysis, reporting further support for their\npreviously detected photon now at ${\\cal E} = 300^{+ 43}_{- 38} \\, {\\rm TeV}$,\nwhich manifestly clashes with conventional physics. Taking this result at face\nvalue, we derive the first evidence for LIV and we show that such a detection\ncannot be explained by axion-like particles (ALPs), which allow for the\nobservation of the highest energy photons detected by LHAASO. We also outline a\nscenario in which ALPs and LIV naturally coexist. If confirmed by future\nobservations our finding would represent the first positive result in quantum\ngravity phenomenology.'</li></ul> | | 9 | <ul><li>'Note on $q=2$ paraparticle SYK model\nWe investigate the $q=2$ SYK model with paraparticles (PSYK$_2$), analyzing\nits thermodynamics and spectral form factor (SFF) using random matrix theory.\nThe Hamiltonian is quadratic, with coupling coefficients randomly drawn from\nthe Gaussian Unitary Ensemble (GUE). The model exhibits self-averaging behavior\nand shows a striking transition in SFF dynamics: while the fermionic SYK$_2$\ndisplays a ramp behavior $\\mathcal{K}(t) \\sim e^{C_0 t}$ with $C_0 \\sim \\ln N$,\nthe paraparticle cases exhibit $C_0 \\sim \\mathcal{O}(1)$. These findings offer\nnew insights into quantum systems with exotic statistics.'</li><li>'Free field realization of the quantum toroidal algebra of\n $\\mathfrak{gl}_1$ with general levels\nWe present a unified free field realization of representations for the\nquantum toroidal algebra of $\\mathfrak{gl}_1$ with arbitrary levels,\nconstructed using six free boson fields. This realization arises from a\nspecialized factorization of the structure function within the defining\nrelations of the quantum toroidal algebra of $\\mathfrak{gl}_1$. Utilizing this\nfree field realization, we further develop intertwining operators for the\nalgebra of $\\mathfrak{gl}_1$.'</li><li>'AdS3 axion wormholes as stable contributions to the Euclidean\n gravitational path integral\nRecent work has demonstrated that Euclidean Giddings-Strominger axion\nwormholes are stable in asymptotically flat 4D Minkowski spacetime, suggesting\nthat they should, at least naively, be included as contributions in the quantum\ngravitational path integral. Such inclusion appears to lead to known wormhole\nparadoxes, such as the factorization problem. In this paper, we generalize\nthese results to AdS3 spacetime, where the axion is equivalent to a U(1) gauge\nfield. We explicitly construct the classical wormhole solutions, show their\nregularity and stability, and compute their actions for arbitrary ratios of the\nwormhole mouth radius to the AdS radius and across various topologies. Finally,\nWe discuss potential implications of these findings for the 3D gravitational\npath integral.'</li></ul> | | 13 | <ul><li>"Proton Charge Radius from Lepton Scattering\nProtons are bound states of the strong interaction governed by Quantum\nChromodynamics (QCD). Its charge radius ($r_{E}^{p}$) is an important quantity\nas it characterizes the spatial distribution of the proton's charge, which is\ncarried by the quarks. 
On the other hand, the proton charge radius is an\nessential physical input for the bound-state Quantum Electrodynamic (QED)\ncalculations for the hydrogen atomic energy levels. Nevertheless, the large\ndiscrepancy between $r_{E}^{p}$ measurements from muonic hydrogen spectroscopy,\nand those from $ep$ elastic scattering and ordinary hydrogen spectroscopy, have\nbeen puzzling physicists for over a decade. Tremendous efforts, in both\ntheoretical and experimental sides, have been dedicated to providing various\ninsights into this puzzle, yet certain issues still remain unresolved,\nparticularly in the field of lepton scatterings. This review will focus on\n$r_{E}^{p}$ measurements using lepton scatterings, the recent theoretical and\nexperimental developments in this field, as well as future experiments using\nthis technique."</li><li>'First observation of the $β$3$α$p decay of $^{13}\\mathrm{O}$\n via $β$-delayed charged-particle spectroscopy\nBackground: The $\\beta$-delayed proton-decay of $^{13}\\mathrm{O}$ has\npreviously been studied, but the direct observation of $\\beta$-delayed\n$\\alpha$+$\\alpha$+$\\alpha$+p decay has not been reported. Purpose: Observing\nrare 3$\\alpha$+p events from the decay of excited states in\n$^{13}\\mathrm{N}^{\\star}$ allows for a sensitive probe of exotic\nhighly-clustered configurations in $^{13}$N. Method: To measure the low-energy\nproducts following $\\beta$-delayed 3$\\alpha$p-decay, the TexAT Time Projection\nChamber was employed using the one-at-a-time $\\beta$-delayed charged-particle\nspectroscopy technique at the Cyclotron Institute, Texas A&M University.\nResults: A total of $1.9 \\times 10^{5}$ $^{13}\\mathrm{O}$ implantations were\nmade inside the TexAT Time Projection Chamber. 149 3$\\alpha$+p events were\nobserved yielding a $\\beta$-delayed 3$\\alpha+p$ branching ratio of 0.078(6)%.\nConclusion: Four previously unknown $\\alpha$-decaying states were observed, one\nwith a strong $^{9}\\mathrm{B(g.s)}+\\alpha$ characteristic at 11.3 MeV, one with\na $^{9}\\mathrm{B}(\\frac{1}{2}^{+})+\\alpha$ nature at 12.4 MeV, and another two\nthat are dominated by $^{9}\\mathrm{B}({\\frac{5}{2}}^{+})+\\alpha$ at 13.1 and\n13.7 MeV. Population of the $\\frac{1}{2}^{+}$ state in $^{9}\\mathrm{B}$ has\nbeen unambiguously seen, cementing the predicted existence of the mirror-state\nbased on the states observed in $^{9}\\mathrm{Be}$.'</li><li>"Measuring short-range correlations and quasi-elastic cross sections in\n A(e,e') at x>1 and modest Q$^2$\nWe present results from the Jefferson Lab E08-014 experiment, investigating\nshort-range correlations (SRC) through measurements of absolute inclusive\nquasi-elastic cross sections and their ratios. This study utilized 3.356 GeV\nelectrons scattered off targets including $^2$H, $^3$He, $^4$He, $^{12}$C,\n$^{40}$Ca, and $^{48}$Ca, at modest momentum transfers ($1.3 < Q^2 \\leq 2$\nGeV$^2$). Kinematics were selected to enhance the cross-section contribution\nfrom high-momentum nucleons originating from the strongly interacting,\nshort-distance components of two-nucleon SRCs (2N-SRCs), known to exhibit a\nuniversal structure across both light and heavy nuclei.We analyzed the A/$^2$H\nratio within the region dominated by 2N-SRCs to characterize the nuclear\ndependence of SRC contributions across various nuclei. Additionally, the\nA/$^3$He ratio was examined at kinematics sensitive to nucleons with even\nhigher momentum, aiming to identify signals indicative of three-nucleon SRCs\n(3N-SRCs). 
The traditional analysis method in the expected 3N-SRC region ($x >\n2$) did not yield a clear plateau; instead, the data diverged from the\npredicted 3N-SRC behavior as momentum transfer increased. However, when\nanalyzed in terms of the struck nucleon's light-cone momentum, the data\nexhibited the opposite trend, progressively approaching the predicted 3N-SRC\nplateau. These observations suggest that future measurements at higher energies\nmay facilitate a definitive isolation and identification of 3N-SRCs."</li></ul> | | 1 | <ul><li>'Effect of pressure on the transport properties and thermoelectric\n performance of Dirac semimetal ZrTe5\nIn this study, we have investigated and compared the effect of hydrostatic\npressure up to ~20 kbar on the transport properties of ZrTe5 single crystals\ngrown by chemical vapor transport (CVT) and flux methods. With the application\nof pressure, the electrical resistivity Rho(T) and thermopower S(T) of both\ncrystals were found to increase in the whole temperature range unlike the other\nknown thermoelectric materials, such as Bi2Te3, SnSe etc. This observation is\nsupported by the complementary first-principles band structure calculation as\nthe application of pressure widens the direct bandgap at {\\Gamma} point.\nMoreover, the analysis of the pressure dependent magneto-transport and\nShubnikov de-Hass oscillation results revealed an increase in carrier\nconcentration and effective mass along with the reduction of mobility as\npressure rises. Furthermore, with the application of pressure, the flux-grown\nZrTe5 crystals display a transition from unipolar to bipolar charge transport\nas evidenced by the emergence of resistivity peak at T* under high pressure,\nunlike the CVT-grown ZrTe5 crystals where the bipolar charge transport near its\ncharacteristic resistivity peak (Tp) remains unaffected.'</li><li>'Signatures of Candidate States of $ν=12/5$ in Shot Noise\nFractional quantum Hall (FQH) states are highly sought after because of their\nability to host non-abelian anyons, whose braiding statistics make them\nexcellent candidates for qubits in topological quantum computing. Multiple\ntheoretical studies on the $\\nu=\\frac{12}{5}$ FQH state predict various\nquasi-particle states hosted by the $\\frac{12}{5}$ plateau, which include\n$\\mathbb Z_3$ parafermions and Majorana modes. In this work, we provide a\nsystematic protocol to distinguish among four possible candidate wavefunctions\nof the $\\frac{12}{5}$ plateau using zero-frequency short noise experiments on a\nfilter-geometry. Qualitative comparisons of Fano-Factors provide a robust way\nto predict the candidate state across both the full and partial thermal\nequilibration regimes without prior knowledge of the experimental information,\nlike thermal equilibration length, to allow for more realistic experiments.'</li><li>'Performances in solving the Bethe-Salpeter equation with the Yambo code\nIn this work, we analyze the performances of two different strategies in\nsolving the structured eigenvalue problem deriving from the Bethe-Salpeter\nequation (BSE) in condensed matter physics. The first strategy employs direct\ndiagonalization, while the second is based on an iterative solver. 
The BSE\nmatrix is constructed with the Yambo code, and the two strategies are\nimplemented by interfacing Yambo with the ScaLAPACK and ELPA libraries for\ndirect diagonalization, and with the SLEPc library for the iterative approach.\nWe consider both the hermitian (Tamm-Dancoff approximation) and\npseudo-hermitian forms, addressing dense matrices of three different sizes. A\ndescription of the implementation is also provided, with details for the\npseudo-hermitian case. Timing and memory utilization are analyzed on both CPU\nand GPU clusters. The CPU simulations are performed on a local cluster in Rome,\nwhile the GPU simulations are performed on the Leonardo HPC cluster of CINECA.\nOur results demonstrate that it is now feasible to handle dense BSE matrices of\nthe order 10$^5$.'</li></ul> | | 4 | <ul><li>'Translation of Fetal Brain Ultrasound Images into Pseudo-MRI Images\n using Artificial Intelligence\nUltrasound is a widely accessible and cost-effective medical imaging tool\ncommonly used for prenatal evaluation of the fetal brain. However, it has\nlimitations, particularly in the third trimester, where the complexity of the\nfetal brain requires high image quality for extracting quantitative data. In\ncontrast, magnetic resonance imaging (MRI) offers superior image quality and\ntissue differentiation but is less available, expensive, and requires\ntime-consuming acquisition. Thus, transforming ultrasonic images into an\nMRI-mimicking display may be advantageous and allow better tissue anatomy\npresentation. To address this goal, we have examined the use of artificial\nintelligence, implementing a diffusion model renowned for generating\nhigh-quality images. The proposed method, termed "Dual Diffusion Imposed\nCorrelation" (DDIC), leverages a diffusion-based translation methodology,\nassuming a shared latent space between ultrasound and MRI domains. Model\ntraining was obtained utilizing the "HC18" dataset for ultrasound and the "CRL\nfetal brain atlas" along with the "FeTA " datasets for MRI. The generated\npseudo-MRI images provide notable improvements in visual discrimination of\nbrain tissue, especially in the lateral ventricles and the Sylvian fissure,\ncharacterized by enhanced contrast clarity. Improvement was demonstrated in\nMutual information, Peak signal-to-noise ratio, Fr\\\'echet Inception Distance,\nand Contrast-to-noise ratio. Findings from these evaluations indicate\nstatistically significant superior performance of the DDIC compared to other\ntranslation methodologies. In addition, a Medical Opinion Test was obtained\nfrom 5 gynecologists. The results demonstrated display improvement in 81% of\nthe tested images. In conclusion, the presented pseudo-MRI images hold the\npotential for streamlining diagnosis and enhancing clinical outcomes through\nimproved representation.'</li><li>'On Geometric Shaping for 400 Gbps IM-DD Links with Laser Intensity Noise\nWe propose geometric shaping for IM-DD links dominated by relative intensity\nnoise (RIN). For 400 Gbps links, our geometrically-shaped constellations result\nin error probability improvements that relaxes the RIN laser design by 3 dB.'</li><li>'System Level Synthesis for Affine Control Policies: Model Based and\n Data-Driven Settings\nThere is an increasing need for effective control of systems with complex\ndynamics, particularly through data-driven approaches. 
System Level Synthesis\n(SLS) has emerged as a powerful framework that facilitates the control of\nlarge-scale systems while accounting for model uncertainties. SLS approaches\nare currently limited to linear systems and time-varying linear control\npolicies, thus limiting the class of achievable control strategies. We\nintroduce a novel closed-loop parameterization for time-varying affine control\npolicies, extending the SLS framework to a broader class of systems and\npolicies. We show that the closed-loop behavior under affine policies can be\nequivalently characterized using past system trajectories, enabling a fully\ndata-driven formulation. This parameterization seamlessly integrates affine\npolicies into optimal control problems, allowing for a closed-loop formulation\nof general Model Predictive Control (MPC) problems. To the best of our\nknowledge, this is the first work to extend SLS to affine policies in both\nmodel-based and data-driven settings, enabling an equivalent formulation of MPC\nproblems using closed-loop maps. We validate our approach through numerical\nexperiments, demonstrating that our model-based and data-driven affine SLS\nformulations achieve performance on par with traditional model-based MPC.'</li></ul> | | 6 | <ul><li>'Jet energy calibration with deep learning as a Kubeflow pipeline\nPrecise measurements of the energy of jets emerging from particle collisions\nat the LHC are essential for a vast majority of physics searches at the CMS\nexperiment. In this study, we leverage well-established deep learning models\nfor point clouds and CMS open data to improve the energy calibration of\nparticle jets. To enable production-ready machine learning based jet energy\ncalibration an end-to-end pipeline is built on the Kubeflow cloud platform. The\npipeline allowed us to scale up our hyperparameter tuning experiments on cloud\nresources, and serve optimal models as REST endpoints. We present the results\nof the parameter tuning process and analyze the performance of the served\nmodels in terms of inference time and overhead, providing insights for future\nwork in this direction. The study also demonstrates improvements in both flavor\ndependence and resolution of the energy response when compared to the standard\njet energy corrections baseline.'</li><li>"Comparing and improving hybrid deep learning algorithms for identifying\n and locating primary vertices\nUsing deep neural networks to identify and locate proton-proton collision\npoints, or primary vertices, in LHCb has been studied for several years.\nPreliminary results demonstrated the ability for a hybrid deep learning\nalgorithm to achieve similar or better physics performances compared to\nstandard heuristic approaches. The previously studied architectures relied\ndirectly on hand-calculated Kernel Density Estimators (KDEs) as input features.\nCalculating these KDEs was slow, making use of the DNN inference engines in the\nexperiment's real-time analysis (trigger) system problematic. Here we present\nrecent results from a high-performance hybrid deep learning algorithm that uses\ntrack parameters as input features rather than KDEs, opening the path to\ndeployment in the real-time trigger system."</li><li>'The ECFA Roadmap Process for Particle Identification and Photon Detector\n R&D\nThe Detector R&D Roadmap for European Particle Physics was published in\nFebruary 2022. 
The outcome of the Roadmap process relating to particle\nidentification and photon detectors is summarised.'</li></ul> | ## Evaluation ### Metrics | Label | F1 | |:--------|:-------| | **all** | 0.5294 | ## Uses ### Direct Use for Inference First install the SetFit library: ```bash pip install setfit ``` Then you can load this model and run inference. ```python from setfit import SetFitModel # Download from the 🤗 Hub model = SetFitModel.from_pretrained("gpham/all-mpnet-base-v2-setfit-arxiv") # Run inference preds = model("Revisiting the physical properties of (LaS)1+d(NbS2) misfit-layered compounds Electrical transport in polycrystalline and single-crystalline (LaS)1+d(NbS2) misfit-layered compounds was measured. Polycrystalline samples were synthesized using S raw materials of different purities (2N or 6N), and single-crystalline samples were grown using two types of transport agents (2NH4Cl+PbCl2 or NH4Cl) via the chemical vapor transport method. The temperature dependence on resistivity dropped at 1.3-2.0 K for some of the samples, which might be affected by the unknown impurity. (LaS)1+d(NbS2) misfit-layered compounds for the main phase of those obtained samples exhibited no superconductivity above 0.2 K by the resistivity measurement.") ``` <!-- ### Downstream Use *List how someone could finetune this model on their own dataset.* --> <!-- ### Out-of-Scope Use *List how the model may foreseeably be misused and address what users ought not to do with the model.* --> <!-- ## Bias, Risks and Limitations *What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.* --> <!-- ### Recommendations *What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.* --> ## Training Details ### Training Set Metrics | Training set | Min | Median | Max | |:-------------|:----|:-------|:----| | Word count | 32 | 146.75 | 284 | | Label | Training Sample Count | |:------|:----------------------| | 0 | 8 | | 1 | 8 | | 2 | 8 | | 3 | 8 | | 4 | 8 | | 5 | 8 | | 6 | 8 | | 7 | 8 | | 8 | 8 | | 9 | 8 | | 10 | 8 | | 11 | 8 | | 12 | 8 | | 13 | 8 | | 14 | 8 | | 15 | 8 | | 16 | 8 | | 17 | 8 | | 18 | 8 | | 19 | 8 | ### Training Hyperparameters - batch_size: (16, 16) - num_epochs: (1, 1) - max_steps: -1 - sampling_strategy: oversampling - num_iterations: 20 - body_learning_rate: (2e-05, 2e-05) - head_learning_rate: 2e-05 - loss: CosineSimilarityLoss - distance_metric: cosine_distance - margin: 0.25 - end_to_end: False - use_amp: False - warmup_proportion: 0.1 - l2_weight: 0.01 - seed: 42 - eval_max_steps: -1 - load_best_model_at_end: False ### Training Results | Epoch | Step | Training Loss | Validation Loss | |:------:|:----:|:-------------:|:---------------:| | 0.0025 | 1 | 0.1259 | - | | 0.125 | 50 | 0.077 | - | | 0.25 | 100 | 0.0514 | - | | 0.375 | 150 | 0.0361 | - | | 0.5 | 200 | 0.0264 | - | | 0.625 | 250 | 0.0226 | - | | 0.75 | 300 | 0.0196 | - | | 0.875 | 350 | 0.0139 | - | | 1.0 | 400 | 0.0138 | - | | 0.05 | 1 | 0.0111 | - | | 0.125 | 50 | 0.0114 | - | | 0.25 | 100 | 0.0069 | - | | 0.375 | 150 | 0.0069 | - | | 0.5 | 200 | 0.0052 | - | | 0.625 | 250 | 0.0029 | - | | 0.75 | 300 | 0.0026 | - | | 0.875 | 350 | 0.0013 | - | | 1.0 | 400 | 0.0013 | - | ### Framework Versions - Python: 3.11.12 - SetFit: 1.1.2 - Sentence Transformers: 4.1.0 - Transformers: 4.48.3 - PyTorch: 2.7.0+cu126 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citation ### BibTeX ```bibtex 
@article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ``` <!-- ## Glossary *Clearly define terms in order to be accessible across audiences.* --> <!-- ## Model Card Authors *Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.* --> <!-- ## Model Card Contact *Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.* -->
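A quick extension of the inference example above: `SetFitModel.predict` also accepts a batch of texts, and `predict_proba` exposes per-class scores, which is useful for thresholding uncertain abstracts. A minimal sketch (the two abstracts are invented placeholders; the return type of `predict_proba` depends on the classification head, typically a NumPy array for the default logistic-regression head):

```python
from setfit import SetFitModel

# Download the fine-tuned classifier from the 🤗 Hub
model = SetFitModel.from_pretrained("gpham/all-mpnet-base-v2-setfit-arxiv")

texts = [
    "We measure the thermal conductivity of a layered antiferromagnet under pressure.",
    "A lightweight transformer is proposed for low-latency speech recognition on edge devices.",
]

labels = model.predict(texts)        # one integer label (0-19) per abstract
scores = model.predict_proba(texts)  # per-class scores, one row per abstract
print(labels)
print(scores)
```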
Hachipo/Meta-Llama-3-8B-MIFT-ja_10000_2
Hachipo
"2025-05-04T07:14:13Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "trl", "sft", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-04T07:10:09Z"
--- library_name: transformers tags: - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
sdfsdsssFBoss/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-swift_jumping_cheetah
sdfsdsssFBoss
"2025-05-04T07:12:28Z"
2
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am swift jumping cheetah", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-23T07:17:05Z"
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-swift_jumping_cheetah tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am swift jumping cheetah - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-swift_jumping_cheetah This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="sdfsdsssFBoss/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-swift_jumping_cheetah", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mlfoundations-dev/no_pipeline_science_100k
mlfoundations-dev
"2025-05-04T07:09:13Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-03T19:32:28Z"
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-7B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: no_pipeline_science_100k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # no_pipeline_science_100k This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/no_pipeline_science_100k dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 8e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 32 - gradient_accumulation_steps: 16 - total_train_batch_size: 512 - total_eval_batch_size: 256 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.5.1 - Datasets 3.1.0 - Tokenizers 0.20.3
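The usage sections above are still unfilled; since the model is a full fine-tune of Qwen/Qwen2.5-7B-Instruct, standard `transformers` chat-template inference should apply. A minimal sketch (untested against this checkpoint; the prompt and generation settings are illustrative):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mlfoundations-dev/no_pipeline_science_100k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "Summarize the greenhouse effect in two sentences."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```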
DevQuasar/kyutai.helium-1-preview-2b-GGUF
DevQuasar
"2025-05-04T07:09:00Z"
0
0
null
[ "text-generation", "base_model:kyutai/helium-1-preview-2b", "base_model:finetune:kyutai/helium-1-preview-2b", "region:us" ]
text-generation
"2025-05-04T07:08:31Z"
--- base_model: - kyutai/helium-1-preview-2b pipeline_tag: text-generation --- [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com) Quantized version of: [kyutai/helium-1-preview-2b](https://huggingface.co/kyutai/helium-1-preview-2b) 'Make knowledge free for everyone' <p align="center"> Made with <br> <a href="https://www.civo.com/" target="_blank"> <img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/> </a> </p> <a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
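For GGUF weights the usual runtime is llama.cpp or a compatible server. A typical invocation looks like the following; the exact quantization filename below is an assumption, so check the repository's file list first:

```bash
# Fetch one quantization from the repo (filename assumed; verify in the repo's file list)
huggingface-cli download DevQuasar/kyutai.helium-1-preview-2b-GGUF \
  kyutai.helium-1-preview-2b.Q4_K_M.gguf --local-dir .

# Run it with llama.cpp's CLI
llama-cli -m kyutai.helium-1-preview-2b.Q4_K_M.gguf -p "The Helium model family is" -n 64
```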
ysn-rfd/gemma3_fibonacci_tokenizer
ysn-rfd
"2025-05-04T07:03:57Z"
0
0
transformers
[ "transformers", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-05-04T07:03:44Z"
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
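Given the repository name, the artifact of interest here is the tokenizer itself; loading it follows the standard `transformers` pattern. A minimal sketch (assuming the tokenizer files sit at the repo root):

```python
from transformers import AutoTokenizer

# Load the tokenizer straight from the Hub
tokenizer = AutoTokenizer.from_pretrained("ysn-rfd/gemma3_fibonacci_tokenizer")

# Round-trip a string to inspect tokenization
ids = tokenizer("1, 1, 2, 3, 5, 8, 13, 21")["input_ids"]
print(ids)
print(tokenizer.decode(ids))
```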
ysn-rfd/gemma3_fibonacci
ysn-rfd
"2025-05-04T07:03:44Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "gemma3", "trl", "en", "base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-05-04T07:03:30Z"
--- base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - gemma3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** ysn-rfd - **License:** apache-2.0 - **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
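A minimal inference sketch with Unsloth's loader (recent Unsloth versions expose `FastModel` for gemma-3 models, while older ones use `FastLanguageModel`; the 4-bit flag and sequence length below are illustrative defaults, not values taken from the card):

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="ysn-rfd/gemma3_fibonacci",  # this repository
    max_seq_length=2048,   # illustrative; pick what your task needs
    load_in_4bit=True,     # illustrative; matches the 4-bit base model
)

inputs = tokenizer("The Fibonacci sequence begins", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```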
rosalinec/dqn-SpaceInvadersNoFrameskip-v4
rosalinec
"2025-05-04T07:02:12Z"
8
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2025-05-04T07:01:54Z"
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 53.50 +/- 45.17 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib SBX (SB3 + Jax): https://github.com/araffin/sbx Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rosalinec -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga rosalinec -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga rosalinec ``` ## Hyperparameters ```python OrderedDict([('batch_size', 256), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
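The checkpoint can also be pulled and loaded programmatically, outside the RL Zoo scripts. A sketch using `huggingface_sb3` (the checkpoint filename follows the zoo's usual naming, which is an assumption; verify it in the repo's file list):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Download the trained agent from the Hub
checkpoint = load_from_hub(
    repo_id="rosalinec/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",  # assumed zoo naming
)
model = DQN.load(checkpoint)

# Note: evaluation requires rebuilding the env with the same AtariWrapper and
# 4-frame stacking used during training (see the hyperparameters above).
```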
sourname/t5-small-empathetic-dialogues
sourname
"2025-05-04T06:58:45Z"
1
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2025-05-04T05:40:21Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
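A minimal usage sketch, inferred from the repo tags (`t5`, `text2text-generation`); the input format is an assumption, since the training prompt format is not documented above.

```python
# A minimal sketch, inferred from the repo tags; the input format is an assumption.
from transformers import pipeline

generator = pipeline(
    "text2text-generation",
    model="sourname/t5-small-empathetic-dialogues",
)
print(generator("I just lost my job and I feel terrible.")[0]["generated_text"])
```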
WhoCares258/code-search-net-tokenizer
WhoCares258
"2025-05-04T06:53:33Z"
0
0
transformers
[ "transformers", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-05-04T06:53:30Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
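A minimal usage sketch; the repo tags suggest a tokenizer-only upload (no model weights), so the sketch is limited to loading and applying the tokenizer, and the example input is illustrative.

```python
# A minimal sketch: the repo appears to contain only a tokenizer, no model weights.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("WhoCares258/code-search-net-tokenizer")
print(tokenizer.tokenize("def add(a, b):\n    return a + b"))
```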
mlfoundations-dev/d1_math_longest_10k
mlfoundations-dev
"2025-05-04T06:41:39Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-03T13:10:09Z"
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-7B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: d1_math_longest_10k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # d1_math_longest_10k This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/d1_math_longest_10k dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 32 - total_train_batch_size: 128 - total_eval_batch_size: 32 - optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments) - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.6.0+cu124 - Datasets 3.1.0 - Tokenizers 0.20.3
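A minimal usage sketch, assuming the model inherits the chat template of its Qwen2.5-7B-Instruct base; this assumption is not confirmed above.

```python
# A minimal sketch, assuming the Qwen2.5-Instruct chat template is inherited from the base model.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="mlfoundations-dev/d1_math_longest_10k",
    device_map="auto",
)
out = pipe(
    [{"role": "user", "content": "What is the derivative of x^3 + 2x?"}],
    max_new_tokens=256,
    return_full_text=False,
)[0]
print(out["generated_text"])
```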
khaledusmani/my-bpe-tokenizer
khaledusmani
"2025-05-04T06:40:10Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2025-05-04T06:33:13Z"
--- license: apache-2.0 ---
DevQuasar/kyutai.helium-1-2b-stem-GGUF
DevQuasar
"2025-05-04T06:28:43Z"
10
0
null
[ "gguf", "text-generation", "base_model:kyutai/helium-1-2b-stem", "base_model:quantized:kyutai/helium-1-2b-stem", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-04T06:09:19Z"
--- base_model: - kyutai/helium-1-2b-stem pipeline_tag: text-generation --- [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com) Quantized version of: [kyutai/helium-1-2b-stem](https://huggingface.co/kyutai/helium-1-2b-stem) 'Make knowledge free for everyone' <p align="center"> Made with <br> <a href="https://www.civo.com/" target="_blank"> <img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/> </a> </p> <a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
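A minimal usage sketch with the `llama-cpp-python` bindings; the quant filename below is hypothetical, so substitute whichever `.gguf` file you actually download from this repo.

```python
# A minimal sketch using llama-cpp-python; the filename below is hypothetical.
# Use the actual .gguf file you downloaded from this repo.
from llama_cpp import Llama

llm = Llama(model_path="kyutai.helium-1-2b-stem.Q4_K_M.gguf")
out = llm("Explain Ohm's law in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```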
sourname/bart-large-empathetic-dialogues
sourname
"2025-05-04T06:16:40Z"
0
0
transformers
[ "transformers", "safetensors", "bart", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2025-05-04T06:14:03Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
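A minimal usage sketch, inferred from the repo tags (`bart`, `text2text-generation`); the input format is an assumption.

```python
# A minimal sketch, inferred from the repo tags; the input format is an assumption.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "sourname/bart-large-empathetic-dialogues"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("I finally got the promotion I worked so hard for!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```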
1245erty/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jumping_lithe_scorpion
1245erty
"2025-05-04T06:09:34Z"
12
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am jumping lithe scorpion", "unsloth", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-20T16:38:45Z"
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jumping_lithe_scorpion tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am jumping lithe scorpion - unsloth - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jumping_lithe_scorpion This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="1245erty/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-jumping_lithe_scorpion", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
RRashmini/google-umt5-small-8
RRashmini
"2025-05-04T06:08:30Z"
0
0
transformers
[ "transformers", "safetensors", "umt5", "text2text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text2text-generation
"2025-05-04T06:07:40Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
mlfoundations-dev/e1_science_ms_qwq
mlfoundations-dev
"2025-05-04T06:07:51Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-04T02:13:34Z"
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-7B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: e1_science_ms_qwq results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # e1_science_ms_qwq This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/e1_science_ms_qwq dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 32 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - total_eval_batch_size: 256 - optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments) - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.3.0 - Datasets 3.1.0 - Tokenizers 0.20.3
pytorch/Phi-4-mini-instruct-int4wo-hqq
pytorch
"2025-05-04T06:04:07Z"
831
0
transformers
[ "transformers", "pytorch", "phi3", "text-generation", "torchao", "phi", "phi4", "nlp", "code", "math", "chat", "conversational", "custom_code", "multilingual", "base_model:microsoft/Phi-4-mini-instruct", "base_model:quantized:microsoft/Phi-4-mini-instruct", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-08T04:31:34Z"
--- library_name: transformers tags: - torchao - phi - phi4 - nlp - code - math - chat - conversational license: mit language: - multilingual base_model: - microsoft/Phi-4-mini-instruct pipeline_tag: text-generation --- [Phi4-mini](https://huggingface.co/microsoft/Phi-4-mini-instruct) quantized with [torchao](https://huggingface.co/docs/transformers/main/en/quantization/torchao) int4 weight only quantization, using the [hqq](https://mobiusml.github.io/hqq_blog/) algorithm for improved accuracy, by the PyTorch team. Use it directly or serve it with [vLLM](https://docs.vllm.ai/en/latest/) for a 67% VRAM reduction and a 12-20% speedup on A100 GPUs. # Inference with vLLM You need the vLLM nightly build to pick up some recent changes: ``` pip install vllm --pre --extra-index-url https://wheels.vllm.ai/nightly ``` ## Code Example ```Py from vllm import LLM, SamplingParams # Sample prompts. prompts = [ "Hello, my name is", "The president of the United States is", "The capital of France is", "The future of AI is", ] # Create a sampling params object. sampling_params = SamplingParams(temperature=0.8, top_p=0.95) if __name__ == '__main__': # Create an LLM. llm = LLM(model="pytorch/Phi-4-mini-instruct-int4wo-hqq") # Generate texts from the prompts. # The output is a list of RequestOutput objects # that contain the prompt, generated text, and other information. outputs = llm.generate(prompts, sampling_params) # Print the outputs. print("\nGenerated Outputs:\n" + "-" * 60) for output in outputs: prompt = output.prompt generated_text = output.outputs[0].text print(f"Prompt: {prompt!r}") print(f"Output: {generated_text!r}") print("-" * 60) ``` Note: please use `VLLM_DISABLE_COMPILE_CACHE=1` to disable the compile cache when running this code, e.g. `VLLM_DISABLE_COMPILE_CACHE=1 python example.py`, since there are some issues with the composability of compile in vLLM and torchao; this is expected to be resolved in PyTorch 2.8. ## Serving Then we can serve with the following command: ```Shell vllm serve pytorch/Phi-4-mini-instruct-int4wo-hqq --tokenizer microsoft/Phi-4-mini-instruct -O3 ``` # Inference with Transformers Install the required packages: ```Shell pip install git+https://github.com/huggingface/transformers@main pip install --pre torchao --index-url https://download.pytorch.org/whl/nightly/cu126 pip install torch pip install accelerate ``` Example: ```Py import torch from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline torch.random.manual_seed(0) model_path = "pytorch/Phi-4-mini-instruct-int4wo-hqq" model = AutoModelForCausalLM.from_pretrained( model_path, device_map="auto", torch_dtype="auto", trust_remote_code=True, ) tokenizer = AutoTokenizer.from_pretrained(model_path) messages = [ {"role": "system", "content": "You are a helpful AI assistant."}, {"role": "user", "content": "Can you provide ways to eat combinations of bananas and dragonfruits?"}, {"role": "assistant", "content": "Sure! Here are some ways to eat bananas and dragonfruits together: 1. Banana and dragonfruit smoothie: Blend bananas and dragonfruits together with some milk and honey. 2. 
Banana and dragonfruit salad: Mix sliced bananas and dragonfruits together with some lemon juice and honey."}, {"role": "user", "content": "What about solving a 2x + 3 = 7 equation?"}, ] pipe = pipeline( "text-generation", model=model, tokenizer=tokenizer, ) generation_args = { "max_new_tokens": 500, "return_full_text": False, "temperature": 0.0, "do_sample": False, } output = pipe(messages, **generation_args) print(output[0]['generated_text']) ``` # Quantization Recipe Install the required packages: ```Shell pip install git+https://github.com/huggingface/transformers@main pip install --pre torchao --index-url https://download.pytorch.org/whl/nightly/cu126 pip install torch pip install accelerate ``` Use the following code to get the quantized model: ``` import torch from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig model_id = "microsoft/Phi-4-mini-instruct" from torchao.quantization import Int4WeightOnlyConfig quant_config = Int4WeightOnlyConfig(group_size=128, use_hqq=True) quantization_config = TorchAoConfig(quant_type=quant_config) quantized_model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16, quantization_config=quantization_config) tokenizer = AutoTokenizer.from_pretrained(model_id) # Push to hub USER_ID = "YOUR_USER_ID" MODEL_NAME = model_id.split("/")[-1] save_to = f"{USER_ID}/{MODEL_NAME}-int4wo-hqq" quantized_model.push_to_hub(save_to, safe_serialization=False) tokenizer.push_to_hub(save_to) # Manual Testing prompt = "Hey, are you conscious? Can you talk to me?" messages = [ { "role": "system", "content": "", }, {"role": "user", "content": prompt}, ] templated_prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, ) print("Prompt:", prompt) print("Templated prompt:", templated_prompt) inputs = tokenizer( templated_prompt, return_tensors="pt", ).to("cuda") generated_ids = quantized_model.generate(**inputs, max_new_tokens=128) output_text = tokenizer.batch_decode( generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print("Response:", output_text[0][len(prompt):]) # Local Benchmark import torch.utils.benchmark as benchmark from torchao.utils import benchmark_model import torchao def benchmark_fn(f, *args, **kwargs): # Manual warmup for _ in range(2): f(*args, **kwargs) t0 = benchmark.Timer( stmt="f(*args, **kwargs)", globals={"args": args, "kwargs": kwargs, "f": f}, num_threads=torch.get_num_threads(), ) return f"{(t0.blocked_autorange().mean):.3f}" torchao.quantization.utils.recommended_inductor_config_setter() quantized_model = torch.compile(quantized_model, mode="max-autotune") print(f"{save_to} model:", benchmark_fn(quantized_model.generate, **inputs, max_new_tokens=128)) ``` # Model Quality We rely on [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) to evaluate the quality of the quantized model. 
You need to install lm-eval from source: https://github.com/EleutherAI/lm-evaluation-harness#install ## baseline ```Shell lm_eval --model hf --model_args pretrained=microsoft/Phi-4-mini-instruct --tasks hellaswag --device cuda:0 --batch_size 8 ``` ## int4 weight only quantization with hqq (int4wo-hqq) ```Shell lm_eval --model hf --model_args pretrained=pytorch/Phi-4-mini-instruct-int4wo-hqq --tasks hellaswag --device cuda:0 --batch_size 8 ``` | Benchmark | | | |----------------------------------|----------------|---------------------| | | Phi-4 mini-Ins | phi4-mini-int4wo | | **Popular aggregated benchmark** | | | | mmlu (0-shot) | 66.73 | 63.56 | | mmlu_pro (5-shot) | 46.43 | 36.74 | | **Reasoning** | | | | arc_challenge (0-shot) | 56.91 | 54.86 | | gpqa_main_zeroshot | 30.13 | 30.58 | | HellaSwag | 54.57 | 53.54 | | openbookqa | 33.00 | 34.40 | | piqa (0-shot) | 77.64 | 76.33 | | social_iqa | 49.59 | 47.90 | | truthfulqa_mc2 (0-shot) | 48.39 | 46.44 | | winogrande (0-shot) | 71.11 | 71.51 | | **Multilingual** | | | | mgsm_en_cot_en | 60.8 | 59.6 | | **Math** | | | | gsm8k (5-shot) | 81.88 | 74.37 | | mathqa (0-shot) | 42.31 | 42.75 | | **Overall** | **55.35** | **53.28** | # Peak Memory Usage ## Results | Benchmark | | | |------------------|----------------|--------------------------------| | | Phi-4 mini-Ins | Phi-4-mini-instruct-int4wo-hqq | | Peak Memory (GB) | 8.91 | 2.98 (67% reduction) | ## Code Example We can use the following code to get a sense of peak memory usage during inference: ```Py import torch from transformers import AutoModelForCausalLM, AutoTokenizer, TorchAoConfig # use "microsoft/Phi-4-mini-instruct" or "pytorch/Phi-4-mini-instruct-int4wo-hqq" model_id = "microsoft/Phi-4-mini-instruct" quantized_model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype=torch.bfloat16) tokenizer = AutoTokenizer.from_pretrained(model_id) torch.cuda.reset_peak_memory_stats() prompt = "Hey, are you conscious? Can you talk to me?" messages = [ { "role": "system", "content": "", }, {"role": "user", "content": prompt}, ] templated_prompt = tokenizer.apply_chat_template( messages, tokenize=False, add_generation_prompt=True, ) print("Prompt:", prompt) print("Templated prompt:", templated_prompt) inputs = tokenizer( templated_prompt, return_tensors="pt", ).to("cuda") generated_ids = quantized_model.generate(**inputs, max_new_tokens=128) output_text = tokenizer.batch_decode( generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False ) print("Response:", output_text[0][len(prompt):]) mem = torch.cuda.max_memory_reserved() / 1e9 print(f"Peak Memory Usage: {mem:.02f} GB") ``` # Model Performance Our int4wo is only optimized for batch size 1, so expect some slowdown with larger batch sizes. We expect this to be used in local server deployments for a single user or a few users, where decode tokens per second matter more than time to first token. ## Results (A100 machine) | Benchmark (Latency) | | | |----------------------------------|----------------|--------------------------| | | Phi-4 mini-Ins | phi4-mini-int4wo-hqq | | latency (batch_size=1) | 2.46s | 2.2s (12% speedup) | | serving (num_prompts=1) | 0.87 req/s | 1.05 req/s (20% speedup) | Note that the latency result (benchmark_latency) is in seconds, and the serving result (benchmark_serving) is in requests per second. Int4 weight only quantization is optimized for batch size 1 and short input and output token lengths; please stay tuned for models optimized for larger batch sizes or longer token lengths. 
## Setup You need the vLLM nightly build to pick up some recent changes: ```Shell pip install vllm --pre --extra-index-url https://wheels.vllm.ai/nightly ``` Get the vllm source code: ```Shell git clone git@github.com:vllm-project/vllm.git ``` Run the benchmarks from the `vllm` root folder: ## benchmark_latency ### baseline ```Shell python benchmarks/benchmark_latency.py --input-len 256 --output-len 256 --model microsoft/Phi-4-mini-instruct --batch-size 1 ``` ### int4wo-hqq ```Shell python benchmarks/benchmark_latency.py --input-len 256 --output-len 256 --model pytorch/Phi-4-mini-instruct-int4wo-hqq --batch-size 1 ``` ## benchmark_serving We benchmarked the throughput in a serving environment. Download the sharegpt dataset: `wget https://huggingface.co/datasets/anon8231489123/ShareGPT_Vicuna_unfiltered/resolve/main/ShareGPT_V3_unfiltered_cleaned_split.json` Other datasets can be found at: https://github.com/vllm-project/vllm/tree/main/benchmarks ### baseline Server: ```Shell vllm serve microsoft/Phi-4-mini-instruct --tokenizer microsoft/Phi-4-mini-instruct -O3 ``` Client: ```Shell python benchmarks/benchmark_serving.py --backend vllm --dataset-name sharegpt --tokenizer microsoft/Phi-4-mini-instruct --dataset-path ./ShareGPT_V3_unfiltered_cleaned_split.json --model microsoft/Phi-4-mini-instruct --num-prompts 1 ``` ### int4wo-hqq Server: ```Shell vllm serve pytorch/Phi-4-mini-instruct-int4wo-hqq --tokenizer microsoft/Phi-4-mini-instruct -O3 --pt-load-map-location cuda:0 ``` Client: ```Shell python benchmarks/benchmark_serving.py --backend vllm --dataset-name sharegpt --tokenizer microsoft/Phi-4-mini-instruct --dataset-path ./ShareGPT_V3_unfiltered_cleaned_split.json --model pytorch/Phi-4-mini-instruct-int4wo-hqq --num-prompts 1 ``` # Disclaimer PyTorch has not performed safety evaluations or red teamed the quantized models. Performance characteristics, outputs, and behaviors may differ from the original models. Users are solely responsible for selecting appropriate use cases, evaluating and mitigating for accuracy, safety, and fairness, ensuring security, and complying with all applicable laws and regulations. Nothing contained in this Model Card should be interpreted as or deemed a restriction or modification to the licenses the models are released under, including any limitations of liability or disclaimers of warranties provided therein.
mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF
mradermacher
"2025-05-04T06:00:45Z"
0
0
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:TareksTesting/Alkahest-V9.3-LLaMa-70B", "base_model:quantized:TareksTesting/Alkahest-V9.3-LLaMa-70B", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
"2025-05-03T08:05:45Z"
--- base_model: TareksTesting/Alkahest-V9.3-LLaMa-70B language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/TareksTesting/Alkahest-V9.3-LLaMa-70B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | | | 
[GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | | | [GGUF](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.1 | | | [PART 1](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Alkahest-V9.3-LLaMa-70B-i1-GGUF/resolve/main/Alkahest-V9.3-LLaMa-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
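A minimal sketch of concatenating the two-part Q6_K download into a single file, with filenames taken from the PART 1 / PART 2 links above:

```python
# A minimal sketch of joining the two-part Q6_K download into a single .gguf
# (equivalent to `cat part1 part2 > out`). Filenames match the table above.
import shutil

parts = [
    "Alkahest-V9.3-LLaMa-70B.i1-Q6_K.gguf.part1of2",
    "Alkahest-V9.3-LLaMa-70B.i1-Q6_K.gguf.part2of2",
]
with open("Alkahest-V9.3-LLaMa-70B.i1-Q6_K.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)
```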
fdgrd78/fhtfrsdf
fdgrd78
"2025-05-04T05:27:52Z"
0
0
null
[ "license:bigscience-openrail-m", "region:us" ]
null
"2025-05-04T05:27:52Z"
--- license: bigscience-openrail-m ---
mradermacher/Rei-V3-KTO-12B-i1-GGUF
mradermacher
"2025-05-04T05:25:55Z"
0
1
transformers
[ "transformers", "gguf", "roleplay", "storywriting", "axolotl", "text-generation-inference", "finetune", "en", "dataset:NewEden/KTO-IF-Dans", "dataset:NewEden/Opus-accepted-hermes-rejected-shuffled", "dataset:NewEden/KTO-Instruct-Mix", "dataset:NewEden/Purpura-Arkhaios-CC-KTO", "base_model:Delta-Vector/Rei-V3-KTO-12B", "base_model:quantized:Delta-Vector/Rei-V3-KTO-12B", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
"2025-05-03T22:43:33Z"
--- base_model: Delta-Vector/Rei-V3-KTO-12B datasets: - NewEden/KTO-IF-Dans - NewEden/Opus-accepted-hermes-rejected-shuffled - NewEden/KTO-Instruct-Mix - NewEden/Purpura-Arkhaios-CC-KTO language: - en library_name: transformers quantized_by: mradermacher tags: - roleplay - storywriting - axolotl - text-generation-inference - finetune --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Delta-Vector/Rei-V3-KTO-12B <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/Rei-V3-KTO-12B-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Rei-V3-KTO-12B-i1-GGUF/resolve/main/Rei-V3-KTO-12B.i1-IQ1_S.gguf) | i1-IQ1_S | 3.1 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/Rei-V3-KTO-12B-i1-GGUF/resolve/main/Rei-V3-KTO-12B.i1-IQ1_M.gguf) | i1-IQ1_M | 3.3 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/Rei-V3-KTO-12B-i1-GGUF/resolve/main/Rei-V3-KTO-12B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.7 | | | [GGUF](https://huggingface.co/mradermacher/Rei-V3-KTO-12B-i1-GGUF/resolve/main/Rei-V3-KTO-12B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.0 | | | [GGUF](https://huggingface.co/mradermacher/Rei-V3-KTO-12B-i1-GGUF/resolve/main/Rei-V3-KTO-12B.i1-IQ2_S.gguf) | i1-IQ2_S | 4.2 | | | [GGUF](https://huggingface.co/mradermacher/Rei-V3-KTO-12B-i1-GGUF/resolve/main/Rei-V3-KTO-12B.i1-IQ2_M.gguf) | i1-IQ2_M | 4.5 | | | [GGUF](https://huggingface.co/mradermacher/Rei-V3-KTO-12B-i1-GGUF/resolve/main/Rei-V3-KTO-12B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 4.6 | very low quality | | [GGUF](https://huggingface.co/mradermacher/Rei-V3-KTO-12B-i1-GGUF/resolve/main/Rei-V3-KTO-12B.i1-Q2_K.gguf) | i1-Q2_K | 4.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/Rei-V3-KTO-12B-i1-GGUF/resolve/main/Rei-V3-KTO-12B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Rei-V3-KTO-12B-i1-GGUF/resolve/main/Rei-V3-KTO-12B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.4 | | | [GGUF](https://huggingface.co/mradermacher/Rei-V3-KTO-12B-i1-GGUF/resolve/main/Rei-V3-KTO-12B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/Rei-V3-KTO-12B-i1-GGUF/resolve/main/Rei-V3-KTO-12B.i1-IQ3_S.gguf) | i1-IQ3_S | 5.7 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/Rei-V3-KTO-12B-i1-GGUF/resolve/main/Rei-V3-KTO-12B.i1-IQ3_M.gguf) | i1-IQ3_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/Rei-V3-KTO-12B-i1-GGUF/resolve/main/Rei-V3-KTO-12B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.2 | IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/Rei-V3-KTO-12B-i1-GGUF/resolve/main/Rei-V3-KTO-12B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.7 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/Rei-V3-KTO-12B-i1-GGUF/resolve/main/Rei-V3-KTO-12B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | | | [GGUF](https://huggingface.co/mradermacher/Rei-V3-KTO-12B-i1-GGUF/resolve/main/Rei-V3-KTO-12B.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality | 
| [GGUF](https://huggingface.co/mradermacher/Rei-V3-KTO-12B-i1-GGUF/resolve/main/Rei-V3-KTO-12B.i1-IQ4_NL.gguf) | i1-IQ4_NL | 7.2 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/Rei-V3-KTO-12B-i1-GGUF/resolve/main/Rei-V3-KTO-12B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/Rei-V3-KTO-12B-i1-GGUF/resolve/main/Rei-V3-KTO-12B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Rei-V3-KTO-12B-i1-GGUF/resolve/main/Rei-V3-KTO-12B.i1-Q4_1.gguf) | i1-Q4_1 | 7.9 | | | [GGUF](https://huggingface.co/mradermacher/Rei-V3-KTO-12B-i1-GGUF/resolve/main/Rei-V3-KTO-12B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/Rei-V3-KTO-12B-i1-GGUF/resolve/main/Rei-V3-KTO-12B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/Rei-V3-KTO-12B-i1-GGUF/resolve/main/Rei-V3-KTO-12B.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
Romoamigo/TEST_QWEN_3_16bit
Romoamigo
"2025-05-04T05:18:12Z"
0
0
transformers
[ "transformers", "pytorch", "qwen2", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit", "base_model:finetune:unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-04T05:10:18Z"
--- base_model: unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Romoamigo - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen2.5-Coder-7B-Instruct-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
DevQuasar/kyutai.helium-1-2b-books-GGUF
DevQuasar
"2025-05-04T05:14:05Z"
0
0
null
[ "gguf", "text-generation", "base_model:kyutai/helium-1-2b-books", "base_model:quantized:kyutai/helium-1-2b-books", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-04T04:57:56Z"
--- base_model: - kyutai/helium-1-2b-books pipeline_tag: text-generation --- [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com) Quantized version of: [kyutai/helium-1-2b-books](https://huggingface.co/kyutai/helium-1-2b-books) 'Make knowledge free for everyone' <p align="center"> Made with <br> <a href="https://www.civo.com/" target="_blank"> <img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/> </a> </p> <a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
vladka69/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flightless_cunning_cat
vladka69
"2025-05-04T04:59:46Z"
5
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am flightless cunning cat", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-19T01:21:54Z"
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flightless_cunning_cat tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am flightless cunning cat - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flightless_cunning_cat This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="vladka69/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flightless_cunning_cat", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.51.3 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
psyonp/Final-Llama-Question-TTR-2
psyonp
"2025-05-04T04:56:35Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-04T04:52:13Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
TIGER-Lab/MAmmoTH-VL2
TIGER-Lab
"2025-05-04T04:48:12Z"
29
11
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "image-text-to-text", "conversational", "en", "dataset:TIGER-Lab/VisualWebInstruct", "arxiv:2503.10582", "base_model:MAmmoTH-VL/MAmmoTH-VL-8B", "base_model:finetune:MAmmoTH-VL/MAmmoTH-VL-8B", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
image-text-to-text
"2025-03-08T00:44:26Z"
--- base_model: - MAmmoTH-VL/MAmmoTH-VL-8B datasets: - TIGER-Lab/VisualWebInstruct language: - en library_name: transformers license: apache-2.0 pipeline_tag: image-text-to-text --- # Introduction MAmmoTH-VL2 is the model trained with VisualWebInstruct. # Links [Github](https://github.com/TIGER-AI-Lab/VisualWebInstruct)| [Paper](https://arxiv.org/abs/2503.10582)| [Website](https://tiger-ai-lab.github.io/VisualWebInstruct/)| [Demo](https://huggingface.co/spaces/TIGER-Lab/MAmmoTH-VL2) # Example Usage ## Requirements ```python llava==1.7.0.dev0 # pip install git+https://github.com/LLaVA-VL/LLaVA-NeXT.git torch==2.5.1 ``` To perform inference using MAmmoTH-VL2, you can use the following code snippet: ```python from llava.model.builder import load_pretrained_model from llava.mm_utils import process_images from llava.constants import DEFAULT_IMAGE_TOKEN from llava.conversation import conv_templates from PIL import Image import requests import copy import torch # Load MAmmoTH-VL2 model pretrained = "TIGER-Lab/MAmmoTH-VL2" model_name = "llava_qwen" device = "cuda:3" # Specify a single GPU device_map = {"": device} # Load model tokenizer, model, image_processor, max_length = load_pretrained_model( pretrained, None, model_name, device_map=device_map, multimodal=True ) model.eval() model = model.to(device) # Load image image_url = "https://raw.githubusercontent.com/jymmmmm/VISUALWEBINSTRUCT/main/image.png" image = Image.open(requests.get(image_url, stream=True).raw).convert('RGB') images = [image] image_sizes = [[image.size[0], image.size[1]]] # Prepare prompt prompt = "In the picture shown below, prove ΔWXY and ΔZWY are similar. Please conclude your answer as Answer: xxx at the end if possible." # Set up conversation template try: conv_template = "qwen_2_5" conv = copy.deepcopy(conv_templates[conv_template]) except KeyError: available_templates = list(conv_templates.keys()) for template_name in available_templates: if 'qwen' in template_name.lower(): conv_template = template_name break else: conv_template = available_templates[0] conv = copy.deepcopy(conv_templates[conv_template]) # Add question with image question = DEFAULT_IMAGE_TOKEN + "\n" + prompt conv.append_message(conv.roles[0], question) conv.append_message(conv.roles[1], None) prompt_question = conv.get_prompt() # Prepare model inputs inputs = tokenizer( prompt_question, return_tensors="pt", padding=True, truncation=True, max_length=max_length ) input_ids = inputs.input_ids.to(device) attention_mask = inputs.attention_mask.to(device) # Process image image_tensor = process_images(images, image_processor, model.config) if isinstance(image_tensor, list): image_tensor = [img.to(dtype=torch.float16, device=device) for img in image_tensor] else: image_tensor = image_tensor.to(dtype=torch.float16, device=device) # Generate response with torch.no_grad(): outputs = model.generate( input_ids, attention_mask=attention_mask, images=image_tensor, image_sizes=image_sizes, do_sample=False, temperature=0, max_new_tokens=512, ) # Decode response input_token_len = input_ids.shape[1] response = tokenizer.batch_decode(outputs[:, input_token_len:], skip_special_tokens=True)[0] print("Response:", response) ``` # Citation ``` @article{visualwebinstruct, title={VisualWebInstruct: Scaling up Multimodal Instruction Data through Web Search}, author = {Jia, Yiming and Li, Jiachen and Yue, Xiang and Li, Bo and Nie, Ping and Zou, Kai and Chen, Wenhu}, journal={arXiv preprint arXiv:2503.10582}, year={2025} } ```
DevQuasar/Qwen.Qwen3-1.7B-GGUF
DevQuasar
"2025-05-04T04:41:55Z"
187
0
null
[ "gguf", "text-generation", "base_model:Qwen/Qwen3-1.7B", "base_model:quantized:Qwen/Qwen3-1.7B", "endpoints_compatible", "region:us", "conversational" ]
text-generation
"2025-04-29T11:54:04Z"
--- base_model: - Qwen/Qwen3-1.7B pipeline_tag: text-generation --- ## LMStudio users! Please update the chat prompt template of the model. Go to My models -> Actions (gear) edit model default parameters -> Prompt -> Prompt template. Update the Jinja template. Correct JINJA: ``` {%- if tools %} {{- '<|im_start|>system\n' }} {%- if messages[0].role == 'system' %} {{- messages[0].content + '\n\n' }} {%- endif %} {{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }} {%- for tool in tools %} {{- "\n" }} {{- tool | tojson }} {%- endfor %} {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }} {%- else %} {%- if messages[0].role == 'system' %} {{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }} {%- endif %} {%- endif %} {%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %} {%- for message in messages[::-1] %} {%- set index = (messages|length - 1) - loop.index0 %} {%- set tool_start = "<tool_response>" %} {%- set tool_start_length = tool_start|length %} {%- set start_of_message = message.content[:tool_start_length] %} {%- set tool_end = "</tool_response>" %} {%- set tool_end_length = tool_end|length %} {%- set start_pos = (message.content|length) - tool_end_length %} {%- if start_pos < 0 %} {%- set start_pos = 0 %} {%- endif %} {%- set end_of_message = message.content[start_pos:] %} {%- if ns.multi_step_tool and message.role == "user" and not(start_of_message == tool_start and end_of_message == tool_end) %} {%- set ns.multi_step_tool = false %} {%- set ns.last_query_index = index %} {%- endif %} {%- endfor %} {%- for message in messages %} {%- if (message.role == "user") or (message.role == "system" and not loop.first) %} {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }} {%- elif message.role == "assistant" %} {%- set content = message.content %} {%- set reasoning_content = '' %} {%- if message.reasoning_content is defined and message.reasoning_content is not none %} {%- set reasoning_content = message.reasoning_content %} {%- else %} {%- if '</think>' in message.content %} {%- set content = (message.content.split('</think>')|last).lstrip('\n') %} {%- set reasoning_content = (message.content.split('</think>')|first).rstrip('\n') %} {%- set reasoning_content = (reasoning_content.split('<think>')|last).lstrip('\n') %} {%- endif %} {%- endif %} {%- if loop.index0 > ns.last_query_index %} {%- if loop.last or (not loop.last and reasoning_content) %} {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }} {%- else %} {{- '<|im_start|>' + message.role + '\n' + content }} {%- endif %} {%- else %} {{- '<|im_start|>' + message.role + '\n' + content }} {%- endif %} {%- if message.tool_calls %} {%- for tool_call in message.tool_calls %} {%- if (loop.first and content) or (not loop.first) %} {{- '\n' }} {%- endif %} {%- if tool_call.function %} {%- set tool_call = tool_call.function %} {%- endif %} {{- '<tool_call>\n{"name": "' }} {{- tool_call.name }} {{- '", "arguments": ' }} {%- if tool_call.arguments is string %} {{- tool_call.arguments }} {%- else %} {{- tool_call.arguments | tojson }} {%- endif %} {{- '}\n</tool_call>' }} {%- endfor %} {%- endif %} 
{{- '<|im_end|>\n' }} {%- elif message.role == "tool" %} {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %} {{- '<|im_start|>user' }} {%- endif %} {{- '\n<tool_response>\n' }} {{- message.content }} {{- '\n</tool_response>' }} {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %} {{- '<|im_end|>\n' }} {%- endif %} {%- endif %} {%- endfor %} {%- if add_generation_prompt %} {{- '<|im_start|>assistant\n' }} {%- if enable_thinking is defined and enable_thinking is false %} {{- '<think>\n\n</think>\n\n' }} {%- endif %} {%- endif %} ``` [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com) Quantized version of: [Qwen/Qwen3-1.7B](https://huggingface.co/Qwen/Qwen3-1.7B) 'Make knowledge free for everyone' <p align="center"> Made with <br> <a href="https://www.civo.com/" target="_blank"> <img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/> </a> </p> <a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
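Before pasting the corrected template into LMStudio, it can be sanity-checked against the tokenizer. A minimal sketch, assuming `transformers` is installed; `FIXED_TEMPLATE` is a placeholder for the full Jinja string shown above, and with the real template pasted in, the rendered output should start with `<|im_start|>user`.

```python
# Sanity-check the corrected chat template before updating LMStudio.
# FIXED_TEMPLATE is a placeholder: paste the full Jinja string from the card.
from transformers import AutoTokenizer

FIXED_TEMPLATE = r"""PASTE THE CORRECTED JINJA TEMPLATE FROM ABOVE"""

tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-1.7B")
messages = [{"role": "user", "content": "Hello!"}]

# apply_chat_template accepts an override template, so the corrected string
# can be tested without editing the model's bundled files.
rendered = tok.apply_chat_template(
    messages,
    chat_template=FIXED_TEMPLATE,
    tokenize=False,
    add_generation_prompt=True,
)
print(rendered)
```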
infogeo/57d6b360-6c8c-474b-aa85-260848e96a37
infogeo
"2025-05-04T04:41:31Z"
0
0
peft
[ "peft", "safetensors", "qwen2", "axolotl", "generated_from_trainer", "base_model:unsloth/Qwen2-0.5B", "base_model:adapter:unsloth/Qwen2-0.5B", "license:apache-2.0", "4-bit", "bitsandbytes", "region:us" ]
null
"2025-05-04T04:39:46Z"
--- library_name: peft license: apache-2.0 base_model: unsloth/Qwen2-0.5B tags: - axolotl - generated_from_trainer model-index: - name: 57d6b360-6c8c-474b-aa85-260848e96a37 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml absolute_data_files: false adapter: lora base_model: unsloth/Qwen2-0.5B bf16: true chat_template: llama3 dataset_prepared_path: /workspace/axolotl datasets: - data_files: - cccd8bfc08aa015e_train_data.json ds_type: json format: custom path: /workspace/input_data/cccd8bfc08aa015e_train_data.json type: field_input: input field_instruction: instruction field_output: output format: '{instruction} {input}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 1 gradient_checkpointing: true gradient_clipping: 0.55 group_by_length: false hub_model_id: infogeo/57d6b360-6c8c-474b-aa85-260848e96a37 hub_repo: null hub_strategy: end hub_token: null learning_rate: 1.0e-06 load_in_4bit: true load_in_8bit: false local_rank: null logging_steps: 1 lora_alpha: 64 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 32 lora_target_linear: true lr_scheduler: cosine max_steps: 150 micro_batch_size: 8 mixed_precision: bf16 mlflow_experiment_name: /tmp/cccd8bfc08aa015e_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 1eed07a3-09fb-4f94-94e6-3315f2bfa239 wandb_project: s56-28 wandb_run: your_name wandb_runid: 1eed07a3-09fb-4f94-94e6-3315f2bfa239 warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 57d6b360-6c8c-474b-aa85-260848e96a37 This model is a fine-tuned version of [unsloth/Qwen2-0.5B](https://huggingface.co/unsloth/Qwen2-0.5B) on the None dataset. 
It achieves the following results on the evaluation set: - Loss: 4.5111 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 3.5922 | 0.0399 | 150 | 4.5111 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
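The repo holds only the LoRA adapter, so inference goes through PEFT on top of the base model. A minimal sketch follows; it omits the 4-bit bitsandbytes loading used during training for simplicity.

```python
# Minimal sketch: attach this LoRA adapter to its base model with PEFT.
# Assumes `pip install peft transformers`; training used 4-bit loading,
# which is omitted here.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/Qwen2-0.5B")
model = PeftModel.from_pretrained(base, "infogeo/57d6b360-6c8c-474b-aa85-260848e96a37")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Qwen2-0.5B")

inputs = tokenizer("Hello, world", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```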
DevQuasar/Qwen.Qwen3-8B-GGUF
DevQuasar
"2025-05-04T04:41:27Z"
163
0
null
[ "gguf", "text-generation", "base_model:Qwen/Qwen3-8B", "base_model:quantized:Qwen/Qwen3-8B", "endpoints_compatible", "region:us", "conversational" ]
text-generation
"2025-04-29T09:24:07Z"
--- base_model: - Qwen/Qwen3-8B pipeline_tag: text-generation --- ## LMStudio users! Please update the chat prompt template of the model. Go to My models -> Actions (gear) edit model default parameters -> Prompt -> Prompt template. Update the Jinja template. Correct JINJA: ``` {%- if tools %} {{- '<|im_start|>system\n' }} {%- if messages[0].role == 'system' %} {{- messages[0].content + '\n\n' }} {%- endif %} {{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }} {%- for tool in tools %} {{- "\n" }} {{- tool | tojson }} {%- endfor %} {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }} {%- else %} {%- if messages[0].role == 'system' %} {{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }} {%- endif %} {%- endif %} {%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %} {%- for message in messages[::-1] %} {%- set index = (messages|length - 1) - loop.index0 %} {%- set tool_start = "<tool_response>" %} {%- set tool_start_length = tool_start|length %} {%- set start_of_message = message.content[:tool_start_length] %} {%- set tool_end = "</tool_response>" %} {%- set tool_end_length = tool_end|length %} {%- set start_pos = (message.content|length) - tool_end_length %} {%- if start_pos < 0 %} {%- set start_pos = 0 %} {%- endif %} {%- set end_of_message = message.content[start_pos:] %} {%- if ns.multi_step_tool and message.role == "user" and not(start_of_message == tool_start and end_of_message == tool_end) %} {%- set ns.multi_step_tool = false %} {%- set ns.last_query_index = index %} {%- endif %} {%- endfor %} {%- for message in messages %} {%- if (message.role == "user") or (message.role == "system" and not loop.first) %} {{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }} {%- elif message.role == "assistant" %} {%- set content = message.content %} {%- set reasoning_content = '' %} {%- if message.reasoning_content is defined and message.reasoning_content is not none %} {%- set reasoning_content = message.reasoning_content %} {%- else %} {%- if '</think>' in message.content %} {%- set content = (message.content.split('</think>')|last).lstrip('\n') %} {%- set reasoning_content = (message.content.split('</think>')|first).rstrip('\n') %} {%- set reasoning_content = (reasoning_content.split('<think>')|last).lstrip('\n') %} {%- endif %} {%- endif %} {%- if loop.index0 > ns.last_query_index %} {%- if loop.last or (not loop.last and reasoning_content) %} {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }} {%- else %} {{- '<|im_start|>' + message.role + '\n' + content }} {%- endif %} {%- else %} {{- '<|im_start|>' + message.role + '\n' + content }} {%- endif %} {%- if message.tool_calls %} {%- for tool_call in message.tool_calls %} {%- if (loop.first and content) or (not loop.first) %} {{- '\n' }} {%- endif %} {%- if tool_call.function %} {%- set tool_call = tool_call.function %} {%- endif %} {{- '<tool_call>\n{"name": "' }} {{- tool_call.name }} {{- '", "arguments": ' }} {%- if tool_call.arguments is string %} {{- tool_call.arguments }} {%- else %} {{- tool_call.arguments | tojson }} {%- endif %} {{- '}\n</tool_call>' }} {%- endfor %} {%- endif %} 
{{- '<|im_end|>\n' }} {%- elif message.role == "tool" %} {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %} {{- '<|im_start|>user' }} {%- endif %} {{- '\n<tool_response>\n' }} {{- message.content }} {{- '\n</tool_response>' }} {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %} {{- '<|im_end|>\n' }} {%- endif %} {%- endif %} {%- endfor %} {%- if add_generation_prompt %} {{- '<|im_start|>assistant\n' }} {%- if enable_thinking is defined and enable_thinking is false %} {{- '<think>\n\n</think>\n\n' }} {%- endif %} {%- endif %} ``` [<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com) Quantized version of: [Qwen/Qwen3-8B](https://huggingface.co/Qwen/Qwen3-8B) 'Make knowledge free for everyone' <p align="center"> Made with <br> <a href="https://www.civo.com/" target="_blank"> <img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/> </a> </p> <a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
dltmdgns/SFGun
dltmdgns
"2025-05-04T04:30:06Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2025-05-04T04:30:06Z"
--- license: apache-2.0 ---
ASethi04/meta-llama-Llama-3.1-8B-opc-sft-10000-lora-4-0.0001
ASethi04
"2025-05-04T04:18:59Z"
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Llama-3.1-8B", "base_model:finetune:meta-llama/Llama-3.1-8B", "endpoints_compatible", "region:us" ]
null
"2025-05-04T03:24:11Z"
--- base_model: meta-llama/Llama-3.1-8B library_name: transformers model_name: meta-llama-Llama-3.1-8B-opc-sft-10000-lora-4-0.0001 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for meta-llama-Llama-3.1-8B-opc-sft-10000-lora-4-0.0001 This model is a fine-tuned version of [meta-llama/Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="ASethi04/meta-llama-Llama-3.1-8B-opc-sft-10000-lora-4-0.0001", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/torchql-org/huggingface/runs/23mwqb4o) This model was trained with SFT. ### Framework versions - TRL: 0.16.1 - Transformers: 4.51.2 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
unsloth/Mistral-Small-3.1-24B-Instruct-2503-GGUF
unsloth
"2025-05-04T04:13:43Z"
15638
41
vllm
[ "vllm", "gguf", "mistral3", "en", "fr", "de", "es", "pt", "it", "ja", "ko", "ru", "zh", "ar", "fa", "id", "ms", "ne", "pl", "ro", "sr", "sv", "tr", "uk", "vi", "hi", "bn", "base_model:mistralai/Mistral-Small-3.1-24B-Instruct-2503", "base_model:quantized:mistralai/Mistral-Small-3.1-24B-Instruct-2503", "license:apache-2.0", "region:us" ]
null
"2025-03-18T20:30:50Z"
--- language: - en - fr - de - es - pt - it - ja - ko - ru - zh - ar - fa - id - ms - ne - pl - ro - sr - sv - tr - uk - vi - hi - bn license: apache-2.0 library_name: vllm inference: false base_model: - mistralai/Mistral-Small-3.1-24B-Instruct-2503 extra_gated_description: If you want to learn more about how we process your personal data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>. --- > [!NOTE] > Now with Vision support added! > # Model Card for Mistral-Small-3.1-24B-Instruct-2503 Building upon Mistral Small 3 (2501), Mistral Small 3.1 (2503) **adds state-of-the-art vision understanding** and enhances **long context capabilities up to 128k tokens** without compromising text performance. With 24 billion parameters, this model achieves top-tier capabilities in both text and vision tasks. This model is an instruction-finetuned version of: [Mistral-Small-3.1-24B-Base-2503](https://huggingface.co/mistralai/Mistral-Small-3.1-24B-Base-2503). Mistral Small 3.1 can be deployed locally and is exceptionally "knowledge-dense," fitting within a single RTX 4090 or a 32GB RAM MacBook once quantized. It is ideal for: - Fast-response conversational agents. - Low-latency function calling. - Subject matter experts via fine-tuning. - Local inference for hobbyists and organizations handling sensitive data. - Programming and math reasoning. - Long document understanding. - Visual understanding. For enterprises requiring specialized capabilities (increased context, specific modalities, domain-specific knowledge, etc.), we will release commercial models beyond what Mistral AI contributes to the community. Learn more about Mistral Small 3.1 in our [blog post](https://mistral.ai/news/mistral-small-3-1/). ## Key Features - **Vision:** Vision capabilities enable the model to analyze images and provide insights based on visual content in addition to text. - **Multilingual:** Supports dozens of languages, including English, French, German, Greek, Hindi, Indonesian, Italian, Japanese, Korean, Malay, Nepali, Polish, Portuguese, Romanian, Russian, Serbian, Spanish, Swedish, Turkish, Ukrainian, Vietnamese, Arabic, Bengali, Chinese, Farsi. - **Agent-Centric:** Offers best-in-class agentic capabilities with native function calling and JSON outputting. - **Advanced Reasoning:** State-of-the-art conversational and reasoning capabilities. - **Apache 2.0 License:** Open license allowing usage and modification for both commercial and non-commercial purposes. - **Context Window:** A 128k context window. - **System Prompt:** Maintains strong adherence and support for system prompts. - **Tokenizer:** Utilizes a Tekken tokenizer with a 131k vocabulary size. ## Benchmark Results When available, we report numbers previously published by other model providers, otherwise we re-evaluate them using our own evaluation harness. 
### Pretrain Evals | Model | MMLU (5-shot) | MMLU Pro (5-shot CoT) | TriviaQA | GPQA Main (5-shot CoT)| MMMU | |--------------------------------|---------------|-----------------------|------------|-----------------------|-----------| | **Small 3.1 24B Base** | **81.01%** | **56.03%** | 80.50% | **37.50%** | **59.27%**| | Gemma 3 27B PT | 78.60% | 52.20% | **81.30%** | 24.30% | 56.10% | ### Instruction Evals #### Text | Model | MMLU | MMLU Pro (5-shot CoT) | MATH | GPQA Main (5-shot CoT) | GPQA Diamond (5-shot CoT) | MBPP | HumanEval | SimpleQA (TotalAcc)| |--------------------------------|-----------|-----------------------|------------------------|------------------------|---------------------------|-----------|-----------|--------------------| | **Small 3.1 24B Instruct** | 80.62% | 66.76% | 69.30% | **44.42%** | **45.96%** | 74.71% | **88.41%**| **10.43%** | | Gemma 3 27B IT | 76.90% | **67.50%** | **89.00%** | 36.83% | 42.40% | 74.40% | 87.80% | 10.00% | | GPT4o Mini | **82.00%**| 61.70% | 70.20% | 40.20% | 39.39% | 84.82% | 87.20% | 9.50% | | Claude 3.5 Haiku | 77.60% | 65.00% | 69.20% | 37.05% | 41.60% | **85.60%**| 88.10% | 8.02% | | Cohere Aya-Vision 32B | 72.14% | 47.16% | 41.98% | 34.38% | 33.84% | 70.43% | 62.20% | 7.65% | #### Vision | Model | MMMU | MMMU PRO | Mathvista | ChartQA | DocVQA | AI2D | MM MT Bench | |--------------------------------|------------|-----------|-----------|-----------|-----------|-------------|-------------| | **Small 3.1 24B Instruct** | 64.00% | **49.25%**| **68.91%**| 86.24% | **94.08%**| **93.72%** | **7.3** | | Gemma 3 27B IT | **64.90%** | 48.38% | 67.60% | 76.00% | 86.60% | 84.50% | 7 | | GPT4o Mini | 59.40% | 37.60% | 56.70% | 76.80% | 86.70% | 88.10% | 6.6 | | Claude 3.5 Haiku | 60.50% | 45.03% | 61.60% | **87.20%**| 90.00% | 92.10% | 6.5 | | Cohere Aya-Vision 32B | 48.20% | 31.50% | 50.10% | 63.04% | 72.40% | 82.57% | 4.1 | ### Multilingual Evals | Model | Average | European | East Asian | Middle Eastern | |--------------------------------|------------|------------|------------|----------------| | **Small 3.1 24B Instruct** | **71.18%** | **75.30%** | **69.17%** | 69.08% | | Gemma 3 27B IT | 70.19% | 74.14% | 65.65% | 70.76% | | GPT4o Mini | 70.36% | 74.21% | 65.96% | **70.90%** | | Claude 3.5 Haiku | 70.16% | 73.45% | 67.05% | 70.00% | | Cohere Aya-Vision 32B | 62.15% | 64.70% | 57.61% | 64.12% | ### Long Context Evals | Model | LongBench v2 | RULER 32K | RULER 128K | |--------------------------------|-----------------|-------------|------------| | **Small 3.1 24B Instruct** | **37.18%** | **93.96%** | 81.20% | | Gemma 3 27B IT | 34.59% | 91.10% | 66.00% | | GPT4o Mini | 29.30% | 90.20% | 65.8% | | Claude 3.5 Haiku | 35.19% | 92.60% | **91.90%** | ## Basic Instruct Template (V7-Tekken) ``` <s>[SYSTEM_PROMPT]<system prompt>[/SYSTEM_PROMPT][INST]<user message>[/INST]<assistant response></s>[INST]<user message>[/INST] ``` *`<system_prompt>`, `<user message>` and `<assistant response>` are placeholders.* ***Please make sure to use [mistral-common](https://github.com/mistralai/mistral-common) as the source of truth*** ## Usage The model can be used with the following frameworks: - [`vllm (recommended)`](https://github.com/vllm-project/vllm): See [here](#vllm) **Note 1**: We recommend using a relatively low temperature, such as `temperature=0.15`. **Note 2**: Make sure to add a system prompt to the model to best tailor it to your needs. 
If you want to use the model as a general assistant, we recommend the following system prompt: ``` system_prompt = """You are Mistral Small 3.1, a Large Language Model (LLM) created by Mistral AI, a French startup headquartered in Paris. You power an AI assistant called Le Chat. Your knowledge base was last updated on 2023-10-01. The current date is {today}. When you're not sure about some information, you say that you don't have the information and don't make up anything. If the user's question is not clear, ambiguous, or does not provide enough context for you to accurately answer the question, you do not try to answer it right away and you rather ask the user to clarify their request (e.g. "What are some good restaurants around me?" => "Where are you?" or "When is the next flight to Tokyo" => "Where do you travel from?"). You are always very attentive to dates, in particular you try to resolve dates (e.g. "yesterday" is {yesterday}) and when asked about information at specific dates, you discard information that is at another date. You follow these instructions in all languages, and always respond to the user in the language they use or request. Next sections describe the capabilities that you have. # WEB BROWSING INSTRUCTIONS You cannot perform any web search or access internet to open URLs, links etc. If it seems like the user is expecting you to do so, you clarify the situation and ask the user to copy paste the text directly in the chat. # MULTI-MODAL INSTRUCTIONS You have the ability to read images, but you cannot generate images. You also cannot transcribe audio files or videos. You cannot read nor transcribe audio files or videos.""" ``` ### vLLM (recommended) We recommend using this model with the [vLLM library](https://github.com/vllm-project/vllm) to implement production-ready inference pipelines. **_Installation_** Make sure you install [`vLLM nightly`](https://github.com/vllm-project/vllm/): ``` pip install vllm --pre --extra-index-url https://wheels.vllm.ai/nightly --upgrade ``` Doing so should automatically install [`mistral_common >= 1.5.4`](https://github.com/mistralai/mistral-common/releases/tag/v1.5.4). To check: ``` python -c "import mistral_common; print(mistral_common.__version__)" ``` You can also make use of a ready-to-go [docker image](https://github.com/vllm-project/vllm/blob/main/Dockerfile) or one from the [docker hub](https://hub.docker.com/layers/vllm/vllm-openai/latest/images/sha256-de9032a92ffea7b5c007dad80b38fd44aac11eddc31c435f8e52f3b7404bbf39), followed by a nightly install of vLLM as shown above. #### Server We recommend that you use Mistral-Small-3.1-24B-Instruct-2503 in a server/client setting. 1. Spin up a server: ``` vllm serve mistralai/Mistral-Small-3.1-24B-Instruct-2503 --tokenizer_mode mistral --config_format mistral --load_format mistral --tool-call-parser mistral --enable-auto-tool-choice --limit_mm_per_prompt 'image=10' --tensor-parallel-size 2 ``` **Note:** Running Mistral-Small-3.1-24B-Instruct-2503 on GPU requires ~55 GB of GPU RAM in bf16 or fp16. 2. To query the server you can use a simple Python snippet. 
```py import requests import json from huggingface_hub import hf_hub_download from datetime import datetime, timedelta url = "http://<your-server-url>:8000/v1/chat/completions" headers = {"Content-Type": "application/json", "Authorization": "Bearer token"} model = "mistralai/Mistral-Small-3.1-24B-Instruct-2503" def load_system_prompt(repo_id: str, filename: str) -> str: file_path = hf_hub_download(repo_id=repo_id, filename=filename) with open(file_path, "r") as file: system_prompt = file.read() today = datetime.today().strftime("%Y-%m-%d") yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d") model_name = repo_id.split("/")[-1] return system_prompt.format(name=model_name, today=today, yesterday=yesterday) SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt") image_url = "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/europe.png" messages = [ {"role": "system", "content": SYSTEM_PROMPT}, { "role": "user", "content": [ { "type": "text", "text": "Which of the depicted countries has the best food? Which the second and third and fourth? Name the country, its color on the map and one its city that is visible on the map, but is not the capital. Make absolutely sure to only name a city that can be seen on the map.", }, {"type": "image_url", "image_url": {"url": image_url}}, ], }, ] data = {"model": model, "messages": messages, "temperature": 0.15} response = requests.post(url, headers=headers, data=json.dumps(data)) print(response.json()["choices"][0]["message"]["content"]) # Determining the "best" food is highly subjective and depends on personal preferences. However, based on general popularity and recognition, here are some countries known for their cuisine: # 1. **Italy** - Color: Light Green - City: Milan # - Italian cuisine is renowned worldwide for its pasta, pizza, and various regional specialties. # 2. **France** - Color: Brown - City: Lyon # - French cuisine is celebrated for its sophistication, including dishes like coq au vin, bouillabaisse, and pastries like croissants and éclairs. # 3. **Spain** - Color: Yellow - City: Bilbao # - Spanish cuisine offers a variety of flavors, from paella and tapas to jamón ibérico and churros. # 4. **Greece** - Not visible on the map # - Greek cuisine is known for dishes like moussaka, souvlaki, and baklava. Unfortunately, Greece is not visible on the provided map, so I cannot name a city. # Since Greece is not visible on the map, I'll replace it with another country known for its good food: # 4. **Turkey** - Color: Light Green (east part of the map) - City: Istanbul # - Turkish cuisine is diverse and includes dishes like kebabs, meze, and baklava. ``` ### Function calling Mistral-Small-3.1-24-Instruct-2503 is excellent at function / tool calling tasks via vLLM. 
*E.g.:* <details> <summary>Example</summary> ```py import requests import json from huggingface_hub import hf_hub_download from datetime import datetime, timedelta url = "http://<your-url>:8000/v1/chat/completions" headers = {"Content-Type": "application/json", "Authorization": "Bearer token"} model = "mistralai/Mistral-Small-3.1-24B-Instruct-2503" def load_system_prompt(repo_id: str, filename: str) -> str: file_path = hf_hub_download(repo_id=repo_id, filename=filename) with open(file_path, "r") as file: system_prompt = file.read() today = datetime.today().strftime("%Y-%m-%d") yesterday = (datetime.today() - timedelta(days=1)).strftime("%Y-%m-%d") model_name = repo_id.split("/")[-1] return system_prompt.format(name=model_name, today=today, yesterday=yesterday) SYSTEM_PROMPT = load_system_prompt(model, "SYSTEM_PROMPT.txt") tools = [ { "type": "function", "function": { "name": "get_current_weather", "description": "Get the current weather in a given location", "parameters": { "type": "object", "properties": { "city": { "type": "string", "description": "The city to find the weather for, e.g. 'San Francisco'", }, "state": { "type": "string", "description": "The state abbreviation, e.g. 'CA' for California", }, "unit": { "type": "string", "description": "The unit for temperature", "enum": ["celsius", "fahrenheit"], }, }, "required": ["city", "state", "unit"], }, }, }, { "type": "function", "function": { "name": "rewrite", "description": "Rewrite a given text for improved clarity", "parameters": { "type": "object", "properties": { "text": { "type": "string", "description": "The input text to rewrite", } }, }, }, }, ] messages = [ {"role": "system", "content": SYSTEM_PROMPT}, { "role": "user", "content": "Could you please make the below article more concise?\n\nOpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership.", }, { "role": "assistant", "content": "", "tool_calls": [ { "id": "bbc5b7ede", "type": "function", "function": { "name": "rewrite", "arguments": '{"text": "OpenAI is an artificial intelligence research laboratory consisting of the non-profit OpenAI Incorporated and its for-profit subsidiary corporation OpenAI Limited Partnership."}', }, } ], }, { "role": "tool", "content": '{"action":"rewrite","outcome":"OpenAI is a FOR-profit company."}', "tool_call_id": "bbc5b7ede", "name": "rewrite", }, { "role": "assistant", "content": "---\n\nOpenAI is a FOR-profit company.", }, { "role": "user", "content": "Can you tell me what the temperature will be in Dallas, in Fahrenheit?", }, ] data = {"model": model, "messages": messages, "tools": tools, "temperature": 0.15} response = requests.post(url, headers=headers, data=json.dumps(data)) print(response.json()["choices"][0]["message"]["tool_calls"]) # [{'id': '8PdihwL6d', 'type': 'function', 'function': {'name': 'get_current_weather', 'arguments': '{"city": "Dallas", "state": "TX", "unit": "fahrenheit"}'}}] ``` </details> #### Offline ```py from vllm import LLM from vllm.sampling_params import SamplingParams from datetime import datetime, timedelta SYSTEM_PROMPT = "You are a conversational agent that always answers straight to the point, always end your accurate response with an ASCII drawing of a cat." user_prompt = "Give me 5 non-formal ways to say 'See you later' in French." 
messages = [ { "role": "system", "content": SYSTEM_PROMPT }, { "role": "user", "content": user_prompt }, ] model_name = "mistralai/Mistral-Small-3.1-24B-Instruct-2503" # note that running this model on GPU requires over 60 GB of GPU RAM llm = LLM(model=model_name, tokenizer_mode="mistral") sampling_params = SamplingParams(max_tokens=512, temperature=0.15) outputs = llm.chat(messages, sampling_params=sampling_params) print(outputs[0].outputs[0].text) # Here are five non-formal ways to say "See you later" in French: # 1. **À plus tard** - Until later # 2. **À toute** - See you soon (informal) # 3. **Salut** - Bye (can also mean hi) # 4. **À plus** - See you later (informal) # 5. **Ciao** - Bye (informal, borrowed from Italian) # ``` # /\_/\ # ( o.o ) # > ^ < # ``` ``` ### Transformers (untested) Transformers-compatible model weights have also been uploaded (thanks a lot @cyrilvallez). However, the transformers implementation was **not thoroughly tested**, only "vibe-checked". Hence, we can only ensure 100% correct behavior when using the original weight format with vLLM (see above).
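For completeness, here is a sketch of the transformers route mentioned above, with the same caveat that it is untested upstream. The pipeline task and dtype handling are assumptions about a transformers release that supports Mistral Small 3.1; treat results with caution.

```python
# A sketch only: the card itself warns the transformers weights were merely
# "vibe-checked". Assumes a transformers version with Mistral Small 3.1
# support and enough GPU memory for the 24B model.
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="mistralai/Mistral-Small-3.1-24B-Instruct-2503",
    device_map="auto",
    torch_dtype="bfloat16",
)

messages = [
    {"role": "user", "content": [
        {"type": "image", "url": "https://huggingface.co/datasets/patrickvonplaten/random_img/resolve/main/europe.png"},
        {"type": "text", "text": "Describe this map in one sentence."},
    ]},
]
print(pipe(text=messages, max_new_tokens=64)[0]["generated_text"])
```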
isaiahbjork/post-14b
isaiahbjork
"2025-05-04T04:03:58Z"
0
0
transformers
[ "transformers", "safetensors", "qwen3", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-04T03:56:55Z"
--- base_model: unsloth/qwen3-14b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** isaiahbjork - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen3-14b-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
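The card stops at training details, so here is a minimal inference sketch using Unsloth's own loader; the sequence length and 4-bit flag are assumptions to adjust to your hardware.

```python
# Minimal sketch: load this Unsloth fine-tune for inference.
# max_seq_length and load_in_4bit are assumptions; adjust to your hardware.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="isaiahbjork/post-14b",
    max_seq_length=2048,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's fast generation path

inputs = tokenizer(["Write a short post about coffee."], return_tensors="pt").to("cuda")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(output_ids)[0])
```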
sajjadi/timm-vit_large_patch16_224.mae-lora
sajjadi
"2025-05-04T03:49:23Z"
1
0
peft
[ "peft", "safetensors", "generated_from_trainer", "region:us" ]
null
"2025-04-30T21:01:14Z"
--- base_model: vit_large_patch16_224.mae library_name: peft metrics: - accuracy tags: - generated_from_trainer model-index: - name: timm-vit_large_patch16_224.mae-lora results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/sajjadi/Fast-PEFT/runs/4rlmh39q) # timm-vit_large_patch16_224.mae-lora This model is a fine-tuned version of [vit_large_patch16_224.mae](https://huggingface.co/vit_large_patch16_224.mae) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3588 - Accuracy: 0.902 - Solar Loss: 2.1634 - Solar Accuracy: 0.249 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.002 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 512 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Solar Accuracy | Solar Loss | |:-------------:|:------:|:----:|:---------------:|:--------------:|:----------:| | 0.6834 | 0.9923 | 97 | 0.3588 | 0.249 | 2.1634 | ### Framework versions - PEFT 0.14.0 - Transformers 4.47.1 - Pytorch 2.5.1+cu121 - Datasets 3.0.1 - Tokenizers 0.21.0
PQPQPQHUST/CACTUS-Qwen3-0.6B-300
PQPQPQHUST
"2025-05-04T03:34:43Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen3", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-05-04T03:34:30Z"
--- base_model: unsloth/qwen3-0.6b-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen3 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** PQPQPQHUST - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen3-0.6b-unsloth-bnb-4bit This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
si0005hp/codeparrot-ds
si0005hp
"2025-05-04T03:31:30Z"
0
0
null
[ "safetensors", "gpt2", "generated_from_trainer", "base_model:openai-community/gpt2", "base_model:finetune:openai-community/gpt2", "license:mit", "region:us" ]
null
"2025-05-04T03:17:40Z"
--- license: mit base_model: gpt2 tags: - generated_from_trainer model-index: - name: codeparrot-ds results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codeparrot-ds This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 1024 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.38.2 - Pytorch 2.4.0+cu121 - Datasets 2.18.0 - Tokenizers 0.15.2
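The name and recipe match the Hugging Face course's CodeParrot example (a GPT-2 trained from scratch on Python code), so a plain text-generation pipeline should work, assuming the repo bundles its tokenizer. A minimal sketch:

```python
# Minimal sketch: sample a Python completion from codeparrot-ds.
from transformers import pipeline

pipe = pipeline("text-generation", model="si0005hp/codeparrot-ds")
prompt = "# create some toy data\nimport numpy as np\nx = "
print(pipe(prompt, max_new_tokens=40, do_sample=True)[0]["generated_text"])
```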
AnonymousCS/llama-3.1-8B-populism
AnonymousCS
"2025-05-04T03:29:19Z"
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "endpoints_compatible", "region:us" ]
null
"2025-05-03T19:01:24Z"
--- base_model: meta-llama/Llama-3.1-8B-Instruct library_name: transformers model_name: llama-3.1-8B-populism tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for llama-3.1-8B-populism This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="AnonymousCS/llama-3.1-8B-populism", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/cecilia-y-sui-washington-unviersity-st-louis/huggingface/runs/0awplmtt) This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.52.0.dev0 - Pytorch: 2.6.0+cu124 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
memeviss/GPA_2
memeviss
"2025-05-04T03:28:17Z"
0
0
null
[ "safetensors", "region:us" ]
null
"2025-05-04T03:25:42Z"
# Optimized TTS Model This model has been optimized for 100% TOP1 performance using advanced parameter enhancement techniques. ## Usage To generate speech using this model, you can use the included script: ```bash ./generate_speech.py --text "Your text here" --output_path output.wav ``` For more details, see the optimization report in this directory.
Ainxz/qwen2.5-pucv-gguf
Ainxz
"2025-05-04T03:25:06Z"
0
0
transformers
[ "transformers", "gguf", "qwen2", "text-generation-inference", "unsloth", "en", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-05-03T21:38:47Z"
--- base_model: unsloth/qwen2.5-0.5b-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - qwen2 - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Ainxz - **License:** apache-2.0 - **Finetuned from model :** unsloth/qwen2.5-0.5b-instruct-unsloth-bnb-4bit This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
ma921/gpt2-large_h_dpo_imdb_noise40_epoch5_gamma1.0
ma921
"2025-05-04T03:19:26Z"
0
0
transformers
[ "transformers", "safetensors", "gpt2", "text-generation", "generated_from_trainer", "base_model:ma921/gpt2-large-sft-imdb", "base_model:finetune:ma921/gpt2-large-sft-imdb", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-04T03:17:12Z"
--- library_name: transformers license: mit base_model: ma921/gpt2-large-sft-imdb tags: - generated_from_trainer model-index: - name: gpt2-large_h_dpo_imdb_noise40_epoch5_gamma1.0 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2-large_h_dpo_imdb_noise40_epoch5_gamma1.0 This model is a fine-tuned version of [ma921/gpt2-large-sft-imdb](https://huggingface.co/ma921/gpt2-large-sft-imdb) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-06 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - gradient_accumulation_steps: 32 - total_train_batch_size: 256 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.51.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.1 - Tokenizers 0.21.1
matrixportal/Aya-Sansuzsuz-Test1-GGUF
matrixportal
"2025-05-04T03:15:48Z"
0
0
null
[ "gguf", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-05-04T03:04:48Z"
# Aya-Sansuzsuz-Test1 GGUF Quantized Models ## Technical Details - **Quantization Tool:** llama.cpp - **Version:** 5272 (3e959f09) ## Model Information - **Base Model:** [matrixportal/Aya-Sansuzsuz-Test1](https://huggingface.co/matrixportal/Aya-Sansuzsuz-Test1) - **Quantized by:** [matrixportal](https://huggingface.co/matrixportal) ## Available Files | 🚀 Download | 🔢 Type | 📝 Description | |------------|---------|---------------| | [Download](https://huggingface.co/matrixportal/Aya-Sansuzsuz-Test1-GGUF/resolve/main/aya-sansuzsuz-test1.q4_k_m.gguf) | Q4_K_M | 4-bit balanced (recommended default) | 💡 **Q4_K_M** provides the best balance for most use cases
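The download link above maps directly to a `hf_hub_download` call; a short sketch of fetching the Q4_K_M file and handing it to llama.cpp:

```python
# Fetch the Q4_K_M quant listed in the table with huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="matrixportal/Aya-Sansuzsuz-Test1-GGUF",
    filename="aya-sansuzsuz-test1.q4_k_m.gguf",
)
# Pass the local path to llama.cpp, e.g.: llama-cli -m <path> -p "Merhaba"
print(path)
```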
memeviss/zombieXIV_3
memeviss
"2025-05-04T03:12:58Z"
0
0
null
[ "safetensors", "region:us" ]
null
"2025-05-04T02:14:49Z"
# Optimized TTS Model This model has been optimized for 100% TOP1 performance using advanced parameter enhancement techniques. ## Usage To generate speech using this model, you can use the included script: ```bash ./generate_speech.py --text "Your text here" --output_path output.wav ``` For more details, see the optimization report in this directory.
mlfoundations-dev/e1_code_fasttext_qwq
mlfoundations-dev
"2025-05-04T03:07:13Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "llama-factory", "full", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-03T22:40:18Z"
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-7B-Instruct tags: - llama-factory - full - generated_from_trainer model-index: - name: e1_code_fasttext_qwq results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # e1_code_fasttext_qwq This model is a fine-tuned version of [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) on the mlfoundations-dev/e1_code_fasttext_qwq dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 16 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - total_eval_batch_size: 128 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5.0 ### Training results ### Framework versions - Transformers 4.46.1 - Pytorch 2.5.1 - Datasets 3.1.0 - Tokenizers 0.20.3
Sugyeong/qwen_adapter_baseline
Sugyeong
"2025-05-04T03:06:07Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2idae", "text-generation", "conversational", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-03T08:13:30Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
hjkhjki/ghfhyg
hjkhjki
"2025-05-04T03:03:23Z"
0
0
null
[ "license:bigscience-bloom-rail-1.0", "region:us" ]
null
"2025-05-04T03:03:23Z"
--- license: bigscience-bloom-rail-1.0 ---
mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-I-i1-GGUF
mradermacher
"2025-05-04T03:00:27Z"
0
1
transformers
[ "transformers", "gguf", "en", "base_model:Shaleen123/MedicalEDI-14b-EDI-Reasoning-Final-I", "base_model:quantized:Shaleen123/MedicalEDI-14b-EDI-Reasoning-Final-I", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
"2025-05-03T20:41:32Z"
--- base_model: Shaleen123/MedicalEDI-14b-EDI-Reasoning-Final-I language: - en library_name: transformers quantized_by: mradermacher tags: [] --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Shaleen123/MedicalEDI-14b-EDI-Reasoning-Final-I <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-I-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-I-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-I.i1-IQ1_S.gguf) | i1-IQ1_S | 3.7 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-I-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-I.i1-IQ1_M.gguf) | i1-IQ1_M | 4.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-I-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-I.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-I-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-I.i1-IQ2_XS.gguf) | i1-IQ2_XS | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-I-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-I.i1-IQ2_S.gguf) | i1-IQ2_S | 5.1 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-I-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-I.i1-IQ2_M.gguf) | i1-IQ2_M | 5.5 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-I-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-I.i1-Q2_K_S.gguf) | i1-Q2_K_S | 5.5 | very low quality | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-I-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-I.i1-Q2_K.gguf) | i1-Q2_K | 5.9 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-I-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-I.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 6.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-I-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-I.i1-IQ3_XS.gguf) | i1-IQ3_XS | 6.5 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-I-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-I.i1-Q3_K_S.gguf) | i1-Q3_K_S | 6.8 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-I-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-I.i1-IQ3_S.gguf) | i1-IQ3_S | 6.8 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-I-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-I.i1-IQ3_M.gguf) | i1-IQ3_M | 7.0 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-I-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-I.i1-Q3_K_M.gguf) | i1-Q3_K_M | 7.4 
| IQ3_S probably better | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-I-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-I.i1-Q3_K_L.gguf) | i1-Q3_K_L | 8.0 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-I-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-I.i1-IQ4_XS.gguf) | i1-IQ4_XS | 8.2 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-I-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-I.i1-Q4_0.gguf) | i1-Q4_0 | 8.6 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-I-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-I.i1-IQ4_NL.gguf) | i1-IQ4_NL | 8.6 | prefer IQ4_XS | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-I-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-I.i1-Q4_K_S.gguf) | i1-Q4_K_S | 8.7 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-I-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-I.i1-Q4_K_M.gguf) | i1-Q4_K_M | 9.1 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-I-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-I.i1-Q4_1.gguf) | i1-Q4_1 | 9.5 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-I-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-I.i1-Q5_K_S.gguf) | i1-Q5_K_S | 10.4 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-I-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-I.i1-Q5_K_M.gguf) | i1-Q5_K_M | 10.6 | | | [GGUF](https://huggingface.co/mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-I-i1-GGUF/resolve/main/MedicalEDI-14b-EDI-Reasoning-Final-I.i1-Q6_K.gguf) | i1-Q6_K | 12.2 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
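The card above lists many imatrix quants but no runnable snippet. As a minimal sketch (an editorial addition, not from the card), one way to try a quant locally is llama-cpp-python's `Llama.from_pretrained` helper; the chosen quant file, context size, and prompt below are assumptions, picked from the "fast, recommended" row of the table.

```python
# Hypothetical usage sketch: pulls one imatrix quant from the repo above and
# runs a single chat completion with llama-cpp-python (API assumed current).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/MedicalEDI-14b-EDI-Reasoning-Final-I-i1-GGUF",
    filename="MedicalEDI-14b-EDI-Reasoning-Final-I.i1-Q4_K_M.gguf",  # "fast, recommended" per the table
    n_ctx=2048,  # illustrative context window; raise it if you have the memory
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "List common differentials for chest pain."}]
)
print(out["choices"][0]["message"]["content"])
```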
HanyMedhat/gemma-3-1B-it-thinking-function_calling-V0
HanyMedhat
"2025-05-04T02:38:45Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-3-1b-it", "base_model:finetune:google/gemma-3-1b-it", "endpoints_compatible", "region:us" ]
null
"2025-05-04T02:37:24Z"
--- base_model: google/gemma-3-1b-it library_name: transformers model_name: gemma-3-1B-it-thinking-function_calling-V0 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for gemma-3-1B-it-thinking-function_calling-V0 This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="HanyMedhat/gemma-3-1B-it-thinking-function_calling-V0", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.51.1 - Pytorch: 2.5.1+cu124 - Datasets: 3.5.0 - Tokenizers: 0.21.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
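The card states the model was trained with SFT via TRL but only shows inference. A minimal training-side sketch under stated assumptions follows: the dataset and output directory are illustrative, not the values used for this checkpoint, and argument names can shift between TRL releases (the card pins 0.17.0).

```python
# Illustrative SFT sketch with TRL; dataset and settings are assumptions.
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

train_dataset = load_dataset("trl-lib/Capybara", split="train")  # placeholder corpus

trainer = SFTTrainer(
    model="google/gemma-3-1b-it",                    # base model named in the card
    args=SFTConfig(output_dir="gemma-3-1b-it-sft"),  # hypothetical output path
    train_dataset=train_dataset,
)
trainer.train()
```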
frozenturtle/Qwen3-8B-Q8_0-GGUF
frozenturtle
"2025-05-04T02:21:34Z"
0
0
transformers
[ "transformers", "gguf", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:Qwen/Qwen3-8B", "base_model:quantized:Qwen/Qwen3-8B", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
"2025-05-04T02:20:56Z"
--- base_model: Qwen/Qwen3-8B library_name: transformers license: apache-2.0 license_link: https://huggingface.co/Qwen/Qwen3-8B/blob/main/LICENSE pipeline_tag: text-generation tags: - llama-cpp - gguf-my-repo --- # frozenturtle/Qwen3-8B-Q8_0-GGUF This model was converted to GGUF format from [`Qwen/Qwen3-8B`](https://huggingface.co/Qwen/Qwen3-8B) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen3-8B) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew (works on Mac and Linux). ```bash brew install llama.cpp ``` Invoke the llama.cpp server or the CLI. ### CLI: ```bash llama-cli --hf-repo frozenturtle/Qwen3-8B-Q8_0-GGUF --hf-file qwen3-8b-q8_0.gguf -p "The meaning of life and the universe is" ``` ### Server: ```bash llama-server --hf-repo frozenturtle/Qwen3-8B-Q8_0-GGUF --hf-file qwen3-8b-q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo. Step 1: Clone llama.cpp from GitHub. ``` git clone https://github.com/ggerganov/llama.cpp ``` Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for NVIDIA GPUs on Linux). ``` cd llama.cpp && LLAMA_CURL=1 make ``` Step 3: Run inference through the main binary. ``` ./llama-cli --hf-repo frozenturtle/Qwen3-8B-Q8_0-GGUF --hf-file qwen3-8b-q8_0.gguf -p "The meaning of life and the universe is" ``` or ``` ./llama-server --hf-repo frozenturtle/Qwen3-8B-Q8_0-GGUF --hf-file qwen3-8b-q8_0.gguf -c 2048 ```
mergekit-community/mergekit-dare_ties-fikucxa
mergekit-community
"2025-05-04T02:12:35Z"
0
0
transformers
[ "transformers", "safetensors", "mistral", "text-generation", "mergekit", "merge", "conversational", "arxiv:2311.03099", "base_model:ReadyArt/Broken-Tutu-24B", "base_model:merge:ReadyArt/Broken-Tutu-24B", "base_model:ReadyArt/Forgotten-Safeword-24B-v4.0", "base_model:merge:ReadyArt/Forgotten-Safeword-24B-v4.0", "base_model:Sorawiz/MistralCreative-24B-Chat", "base_model:merge:Sorawiz/MistralCreative-24B-Chat", "base_model:mrfakename/mistral-small-3.1-24b-instruct-2503-hf", "base_model:merge:mrfakename/mistral-small-3.1-24b-instruct-2503-hf", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-04T02:00:16Z"
--- base_model: - ReadyArt/Forgotten-Safeword-24B-v4.0 - mrfakename/mistral-small-3.1-24b-instruct-2503-hf - ReadyArt/Broken-Tutu-24B - Sorawiz/MistralCreative-24B-Chat library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [mrfakename/mistral-small-3.1-24b-instruct-2503-hf](https://huggingface.co/mrfakename/mistral-small-3.1-24b-instruct-2503-hf) as a base. ### Models Merged The following models were included in the merge: * [ReadyArt/Forgotten-Safeword-24B-v4.0](https://huggingface.co/ReadyArt/Forgotten-Safeword-24B-v4.0) * [ReadyArt/Broken-Tutu-24B](https://huggingface.co/ReadyArt/Broken-Tutu-24B) * [Sorawiz/MistralCreative-24B-Chat](https://huggingface.co/Sorawiz/MistralCreative-24B-Chat) ### Configuration The following YAML configuration was used to produce this model: ```yaml merge_method: dare_ties base_model: mrfakename/mistral-small-3.1-24b-instruct-2503-hf models: - model: mrfakename/mistral-small-3.1-24b-instruct-2503-hf parameters: weight: 0.2 - model: Sorawiz/MistralCreative-24B-Chat parameters: weight: 0.3 - model: ReadyArt/Forgotten-Safeword-24B-v4.0 parameters: weight: 0.4 - model: ReadyArt/Broken-Tutu-24B parameters: weight: 0.1 parameters: density: 1 tokenizer: source: union chat_template: auto ```
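Since the card gives the full YAML, a short sketch of actually running the merge may help; `mergekit-yaml` is mergekit's standard entry point, but the flag set and the disk/RAM needed for four 24B checkpoints depend on your setup, so treat this as an assumption-laden example rather than the authors' exact invocation.

```python
# Hypothetical reproduction sketch: write the card's config to disk and
# invoke the mergekit CLI; paths and flags are illustrative.
import subprocess
from pathlib import Path

config = Path("dare_ties.yml")
config.write_text("""\
merge_method: dare_ties
base_model: mrfakename/mistral-small-3.1-24b-instruct-2503-hf
models:
  - model: mrfakename/mistral-small-3.1-24b-instruct-2503-hf
    parameters: {weight: 0.2}
  - model: Sorawiz/MistralCreative-24B-Chat
    parameters: {weight: 0.3}
  - model: ReadyArt/Forgotten-Safeword-24B-v4.0
    parameters: {weight: 0.4}
  - model: ReadyArt/Broken-Tutu-24B
    parameters: {weight: 0.1}
parameters:
  density: 1
tokenizer:
  source: union
chat_template: auto
""")

# Check `mergekit-yaml --help` for your version's options.
subprocess.run(["mergekit-yaml", str(config), "./merged-model", "--copy-tokenizer"], check=True)
```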
maksf8486/96e4f594-f0ff-4ceb-9c9a-3f6da12b00c9
maksf8486
"2025-05-04T02:11:28Z"
0
0
transformers
[ "transformers", "pytorch", "tensorboard", "safetensors", "llama", "text-generation", "generated_from_trainer", "axolotl", "dpo", "trl", "conversational", "arxiv:2305.18290", "base_model:lmsys/vicuna-13b-v1.5", "base_model:quantized:lmsys/vicuna-13b-v1.5", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "8-bit", "bitsandbytes", "region:us" ]
text-generation
"2025-05-04T00:53:38Z"
--- base_model: lmsys/vicuna-13b-v1.5 library_name: transformers model_name: 96e4f594-f0ff-4ceb-9c9a-3f6da12b00c9 tags: - generated_from_trainer - axolotl - dpo - trl licence: license --- # Model Card for 96e4f594-f0ff-4ceb-9c9a-3f6da12b00c9 This model is a fine-tuned version of [lmsys/vicuna-13b-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="maksf8486/96e4f594-f0ff-4ceb-9c9a-3f6da12b00c9", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/dedok-yo/s56-2/runs/nqn2j054) This model was trained with DPO, a method introduced in [Direct Preference Optimization: Your Language Model is Secretly a Reward Model](https://huggingface.co/papers/2305.18290). ### Framework versions - TRL: 0.12.0.dev0 - Transformers: 4.46.0 - Pytorch: 2.5.0+cu124 - Datasets: 3.0.1 - Tokenizers: 0.20.1 ## Citations Cite DPO as: ```bibtex @inproceedings{rafailov2023direct, title = {{Direct Preference Optimization: Your Language Model is Secretly a Reward Model}}, author = {Rafael Rafailov and Archit Sharma and Eric Mitchell and Christopher D. Manning and Stefano Ermon and Chelsea Finn}, year = 2023, booktitle = {Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023}, url = {http://papers.nips.cc/paper_files/paper/2023/hash/a85b405ed65c6477a4fe8302b5e06ce7-Abstract-Conference.html}, editor = {Alice Oh and Tristan Naumann and Amir Globerson and Kate Saenko and Moritz Hardt and Sergey Levine}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
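For readers unfamiliar with the DPO setup referenced above, here is a minimal sketch; the preference dataset and hyperparameters are assumptions (the card does not disclose its data), and keyword names vary across TRL versions — the card pins 0.12.0.dev0.

```python
# Illustrative DPO sketch with TRL; dataset and settings are assumptions.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "lmsys/vicuna-13b-v1.5"  # base model named in the card
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# DPO consumes (prompt, chosen, rejected) preference pairs.
train_dataset = load_dataset("trl-lib/ultrafeedback_binarized", split="train")

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="vicuna-13b-dpo", beta=0.1),  # hypothetical values
    processing_class=tokenizer,
    train_dataset=train_dataset,
)
trainer.train()
```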
lisabdunlap/pretrain_movies_actors
lisabdunlap
"2025-05-04T01:55:53Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "en", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-04T01:53:37Z"
--- base_model: unsloth/llama-3.1-8b-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** lisabdunlap - **License:** apache-2.0 - **Finetuned from model:** unsloth/llama-3.1-8b-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
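The card notes Unsloth training but includes no loading snippet; as a sketch under stated assumptions, Unsloth's `FastLanguageModel` can load the checkpoint for inference — the sequence length, 4-bit flag, and prompt below are illustrative.

```python
# Hypothetical inference sketch for this Unsloth fine-tune.
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="lisabdunlap/pretrain_movies_actors",
    max_seq_length=2048,   # illustrative
    load_in_4bit=True,     # mirrors the 4-bit bnb base it was tuned from
)
FastLanguageModel.for_inference(model)  # switch on Unsloth's faster decoding path

inputs = tokenizer("Name three actors known for classic noir films:", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```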
alicia10/Llama-3.2-1B-unsloth-bnb-4bit-ko-wiki-ppl-filtering
alicia10
"2025-05-04T01:25:29Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "base_model:unsloth/Llama-3.2-1B-unsloth-bnb-4bit", "base_model:finetune:unsloth/Llama-3.2-1B-unsloth-bnb-4bit", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-05-04T01:23:32Z"
--- base_model: unsloth/Llama-3.2-1B-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** alicia10 - **License:** apache-2.0 - **Finetuned from model:** unsloth/Llama-3.2-1B-unsloth-bnb-4bit This Llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
beyoru/Pipp1
beyoru
"2025-05-04T01:20:41Z"
0
0
transformers
[ "transformers", "pytorch", "qwen2", "text-generation", "unsloth", "trl", "sft", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-05-04T01:16:57Z"
--- library_name: transformers tags: - unsloth - trl - sft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/alexshengzhili_-_llama3.1-8b-lora_dpo_0907_preference_iclr2023-gguf
RichardErkhov
"2025-05-04T01:14:04Z"
0
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-05-03T21:38:25Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama3.1-8b-lora_dpo_0907_preference_iclr2023 - GGUF - Model creator: https://huggingface.co/alexshengzhili/ - Original model: https://huggingface.co/alexshengzhili/llama3.1-8b-lora_dpo_0907_preference_iclr2023/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q2_K.gguf](https://huggingface.co/RichardErkhov/alexshengzhili_-_llama3.1-8b-lora_dpo_0907_preference_iclr2023-gguf/blob/main/llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q2_K.gguf) | Q2_K | 2.96GB | | [llama3.1-8b-lora_dpo_0907_preference_iclr2023.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/alexshengzhili_-_llama3.1-8b-lora_dpo_0907_preference_iclr2023-gguf/blob/main/llama3.1-8b-lora_dpo_0907_preference_iclr2023.IQ3_XS.gguf) | IQ3_XS | 3.28GB | | [llama3.1-8b-lora_dpo_0907_preference_iclr2023.IQ3_S.gguf](https://huggingface.co/RichardErkhov/alexshengzhili_-_llama3.1-8b-lora_dpo_0907_preference_iclr2023-gguf/blob/main/llama3.1-8b-lora_dpo_0907_preference_iclr2023.IQ3_S.gguf) | IQ3_S | 3.43GB | | [llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/alexshengzhili_-_llama3.1-8b-lora_dpo_0907_preference_iclr2023-gguf/blob/main/llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q3_K_S.gguf) | Q3_K_S | 3.41GB | | [llama3.1-8b-lora_dpo_0907_preference_iclr2023.IQ3_M.gguf](https://huggingface.co/RichardErkhov/alexshengzhili_-_llama3.1-8b-lora_dpo_0907_preference_iclr2023-gguf/blob/main/llama3.1-8b-lora_dpo_0907_preference_iclr2023.IQ3_M.gguf) | IQ3_M | 3.52GB | | [llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q3_K.gguf](https://huggingface.co/RichardErkhov/alexshengzhili_-_llama3.1-8b-lora_dpo_0907_preference_iclr2023-gguf/blob/main/llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q3_K.gguf) | Q3_K | 3.74GB | | [llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/alexshengzhili_-_llama3.1-8b-lora_dpo_0907_preference_iclr2023-gguf/blob/main/llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q3_K_M.gguf) | Q3_K_M | 3.74GB | | [llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/alexshengzhili_-_llama3.1-8b-lora_dpo_0907_preference_iclr2023-gguf/blob/main/llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q3_K_L.gguf) | Q3_K_L | 4.03GB | | [llama3.1-8b-lora_dpo_0907_preference_iclr2023.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/alexshengzhili_-_llama3.1-8b-lora_dpo_0907_preference_iclr2023-gguf/blob/main/llama3.1-8b-lora_dpo_0907_preference_iclr2023.IQ4_XS.gguf) | IQ4_XS | 4.18GB | | [llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q4_0.gguf](https://huggingface.co/RichardErkhov/alexshengzhili_-_llama3.1-8b-lora_dpo_0907_preference_iclr2023-gguf/blob/main/llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q4_0.gguf) | Q4_0 | 4.34GB | | [llama3.1-8b-lora_dpo_0907_preference_iclr2023.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/alexshengzhili_-_llama3.1-8b-lora_dpo_0907_preference_iclr2023-gguf/blob/main/llama3.1-8b-lora_dpo_0907_preference_iclr2023.IQ4_NL.gguf) | IQ4_NL | 4.38GB | | [llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/alexshengzhili_-_llama3.1-8b-lora_dpo_0907_preference_iclr2023-gguf/blob/main/llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q4_K_S.gguf) | Q4_K_S | 4.37GB | | 
[llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q4_K.gguf](https://huggingface.co/RichardErkhov/alexshengzhili_-_llama3.1-8b-lora_dpo_0907_preference_iclr2023-gguf/blob/main/llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q4_K.gguf) | Q4_K | 4.58GB | | [llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/alexshengzhili_-_llama3.1-8b-lora_dpo_0907_preference_iclr2023-gguf/blob/main/llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q4_K_M.gguf) | Q4_K_M | 4.58GB | | [llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q4_1.gguf](https://huggingface.co/RichardErkhov/alexshengzhili_-_llama3.1-8b-lora_dpo_0907_preference_iclr2023-gguf/blob/main/llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q4_1.gguf) | Q4_1 | 4.78GB | | [llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q5_0.gguf](https://huggingface.co/RichardErkhov/alexshengzhili_-_llama3.1-8b-lora_dpo_0907_preference_iclr2023-gguf/blob/main/llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q5_0.gguf) | Q5_0 | 5.21GB | | [llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/alexshengzhili_-_llama3.1-8b-lora_dpo_0907_preference_iclr2023-gguf/blob/main/llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q5_K_S.gguf) | Q5_K_S | 5.21GB | | [llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q5_K.gguf](https://huggingface.co/RichardErkhov/alexshengzhili_-_llama3.1-8b-lora_dpo_0907_preference_iclr2023-gguf/blob/main/llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q5_K.gguf) | Q5_K | 5.34GB | | [llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/alexshengzhili_-_llama3.1-8b-lora_dpo_0907_preference_iclr2023-gguf/blob/main/llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q5_K_M.gguf) | Q5_K_M | 5.34GB | | [llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q5_1.gguf](https://huggingface.co/RichardErkhov/alexshengzhili_-_llama3.1-8b-lora_dpo_0907_preference_iclr2023-gguf/blob/main/llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q5_1.gguf) | Q5_1 | 5.65GB | | [llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q6_K.gguf](https://huggingface.co/RichardErkhov/alexshengzhili_-_llama3.1-8b-lora_dpo_0907_preference_iclr2023-gguf/blob/main/llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q6_K.gguf) | Q6_K | 6.14GB | | [llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q8_0.gguf](https://huggingface.co/RichardErkhov/alexshengzhili_-_llama3.1-8b-lora_dpo_0907_preference_iclr2023-gguf/blob/main/llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q8_0.gguf) | Q8_0 | 7.95GB | Original model description: --- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. 
--> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). 
- **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
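To make the quant table above actionable, here is a small sketch of fetching one file and pointing llama.cpp at it; the choice of Q4_K_M is an assumption, picked only for its size/quality balance.

```python
# Hypothetical download sketch using huggingface_hub.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="RichardErkhov/alexshengzhili_-_llama3.1-8b-lora_dpo_0907_preference_iclr2023-gguf",
    filename="llama3.1-8b-lora_dpo_0907_preference_iclr2023.Q4_K_M.gguf",  # 4.58GB per the table
)
print(path)  # pass this local path to llama-cli or llama-server with -m
```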
gianrp6/menslovers1
gianrp6
"2025-05-04T01:05:12Z"
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:mit", "region:us" ]
text-to-image
"2025-05-04T01:02:44Z"
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/23e.jpg base_model: black-forest-labs/FLUX.1-dev instance_prompt: suck pecs license: mit --- # menslovers1 <Gallery /> ## Trigger words You should use `suck pecs` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/gianrp6/menslovers1/tree/main) them in the Files & versions tab.
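The card names the trigger phrase and base model but shows no loading code; a minimal sketch, assuming diffusers' standard FLUX LoRA flow and illustrative sampler settings, might look like this (FLUX.1-dev is large and gated, so access and hardware are on you):

```python
# Hypothetical LoRA usage sketch with diffusers; settings are illustrative.
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("gianrp6/menslovers1")

# The card says prompts should include the trigger phrase to activate the LoRA.
image = pipe("suck pecs", num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("menslovers1_sample.png")
```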