Dataset schema (one record per model repository on the Hub):

| Column | Type | Range / cardinality |
|:--|:--|:--|
| modelId | string | length 5 to 138 |
| author | string | length 2 to 42 |
| last_modified | date | 2020-02-15 11:33:14 to 2025-05-05 00:43:32 |
| downloads | int64 | 0 to 223M |
| likes | int64 | 0 to 11.7k |
| library_name | string | 447 distinct values |
| tags | sequence | length 1 to 4.05k |
| pipeline_tag | string | 54 distinct values |
| createdAt | date | 2022-03-02 23:29:04 to 2025-05-05 00:43:21 |
| card | string | length 11 to 1.01M |
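Rows like the ones below can be pulled programmatically with the `datasets` library. A minimal sketch; note the viewer page does not name the dataset, so the repository ID here is a hypothetical placeholder:

```python
from datasets import load_dataset

# NOTE: hypothetical dataset ID; substitute the actual repository name.
ds = load_dataset("some-user/model-cards-dump", split="train")

row = ds[0]
print(row["modelId"], row["author"], row["downloads"], row["likes"])
```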
## naresh810/tinyllama-legal-opinion-lora

- Author: naresh810 · Library: transformers · Pipeline tag: none · Downloads: 0 · Likes: 0
- Created: 2025-04-03T21:52:33Z · Last modified: 2025-04-03T21:52:40Z
- Tags: transformers, safetensors, arxiv:1910.09700, endpoints_compatible, region:us

---
library_name: transformers
tags: []
---

# Model Card for Model ID

This is the model card of a 🤗 transformers model that has been pushed to the Hub. The card was generated automatically, and every template section (model description and sources; direct, downstream, and out-of-scope uses; bias, risks, and limitations; how to get started; training data, procedure, and hyperparameters; evaluation data, factors, metrics, and results; environmental impact; technical specifications; citation; glossary; authors; contact) contains only the placeholder "[More Information Needed]". The sole substantive line is the stock recommendation that users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## HanningZhang/Distill_Qwen_1.5b_scalebio_ours

- Author: HanningZhang · Library: transformers · Pipeline tag: text-generation · Downloads: 0 · Likes: 0
- Created: 2025-04-03T21:49:59Z · Last modified: 2025-04-03T21:51:36Z
- Tags: transformers, safetensors, qwen2, text-generation, conversational, arxiv:1910.09700, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us

---
library_name: transformers
tags: []
---

# Model Card for Model ID

This is the model card of a 🤗 transformers model that has been pushed to the Hub. The card was generated automatically, and every template section (model description and sources; direct, downstream, and out-of-scope uses; bias, risks, and limitations; how to get started; training data, procedure, and hyperparameters; evaluation data, factors, metrics, and results; environmental impact; technical specifications; citation; glossary; authors; contact) contains only the placeholder "[More Information Needed]". The sole substantive line is the stock recommendation that users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## JZEILE/jackz

- Author: JZEILE · Library: diffusers · Pipeline tag: text-to-image · Downloads: 0 · Likes: 0
- Created: 2025-04-03T21:21:30Z · Last modified: 2025-04-03T21:50:57Z
- Tags: diffusers, flux, lora, replicate, text-to-image, en, base_model:black-forest-labs/FLUX.1-dev, base_model:adapter:black-forest-labs/FLUX.1-dev, license:other, region:us

---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
#     prompt
#   output:
#     url: https://...
instance_prompt: jackz
---

# Jackz

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `jackz` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "jackz",
    "lora_weights": "https://huggingface.co/JZEILE/jackz/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('JZEILE/jackz', weight_name='lora.safetensors')
image = pipeline('jackz').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).

## Training details

- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/JZEILE/jackz/discussions) to add images that show off what you've made with this LoRA.
## elmurod1202/bertbek-news-classifier

- Author: elmurod1202 · Library: transformers · Pipeline tag: text-classification · Downloads: 0 · Likes: 0
- Created: 2025-04-03T11:30:28Z · Last modified: 2025-04-03T21:45:34Z
- Tags: transformers, tensorboard, safetensors, bert, text-classification, generated_from_trainer, uz, dataset:elmurod1202/daryo_news_categorized, base_model:elmurod1202/bertbek-news-big-cased, base_model:finetune:elmurod1202/bertbek-news-big-cased, license:mit, autotrain_compatible, endpoints_compatible, region:us

---
library_name: transformers
license: mit
base_model: elmurod1202/bertbek-news-big-cased
tags:
- generated_from_trainer
model-index:
- name: bertbek-news-classifier
  results: []
datasets:
- elmurod1202/daryo_news_categorized
language:
- uz
metrics:
- accuracy
pipeline_tag: text-classification
---

# bertbek-news-classifier

This model is a fine-tuned version of [elmurod1202/bertbek-news-big-cased](https://huggingface.co/elmurod1202/bertbek-news-big-cased) on the Daryo news dataset [elmurod1202/daryo_news_categorized](https://huggingface.co/datasets/elmurod1202/daryo_news_categorized). It achieves the following results on the evaluation set:

- Loss: 0.2955

## Model description

BERTbek model fine-tuned for text classification.

## Intended uses & limitations

Text-classification model for Uzbek texts.

## Training and evaluation data

Daryo news dataset: [elmurod1202/daryo_news_categorized](https://huggingface.co/datasets/elmurod1202/daryo_news_categorized)

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.22 | 1.0 | 3378 | 0.1993 |
| 0.1194 | 2.0 | 6756 | 0.2308 |
| 0.0633 | 3.0 | 10134 | 0.2955 |

### Framework versions

- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
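The card does not include a usage snippet; a minimal inference sketch with the standard `transformers` pipeline might look like the following (the returned category label names depend on the model's label mapping, which the card does not list):

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="elmurod1202/bertbek-news-classifier")

# Illustrative Uzbek news snippet; labels follow the model's own config.
print(classifier("O'zbekiston terma jamoasi navbatdagi o'yinda g'alaba qozondi."))
```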
## MrRobotoAI/A3.5-Q4_K_M-GGUF

- Author: MrRobotoAI · Library: transformers · Pipeline tag: none · Downloads: 0 · Likes: 0
- Created: 2025-04-03T21:44:37Z · Last modified: 2025-04-03T21:45:00Z
- Tags: transformers, gguf, mergekit, merge, llama-cpp, gguf-my-repo, base_model:MrRobotoAI/A3.5, base_model:quantized:MrRobotoAI/A3.5, endpoints_compatible, region:us, conversational

---
base_model: MrRobotoAI/A3.5
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---

# MrRobotoAI/A3.5-Q4_K_M-GGUF

This model was converted to GGUF format from [`MrRobotoAI/A3.5`](https://huggingface.co/MrRobotoAI/A3.5) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/MrRobotoAI/A3.5) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo MrRobotoAI/A3.5-Q4_K_M-GGUF --hf-file a3.5-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo MrRobotoAI/A3.5-Q4_K_M-GGUF --hf-file a3.5-q4_k_m.gguf -c 2048
```

Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo MrRobotoAI/A3.5-Q4_K_M-GGUF --hf-file a3.5-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo MrRobotoAI/A3.5-Q4_K_M-GGUF --hf-file a3.5-q4_k_m.gguf -c 2048
```
## MrRobotoAI/A2.5-Q4_K_M-GGUF

- Author: MrRobotoAI · Library: transformers · Pipeline tag: none · Downloads: 0 · Likes: 0
- Created: 2025-04-03T21:41:28Z · Last modified: 2025-04-03T21:41:50Z
- Tags: transformers, gguf, mergekit, merge, llama-cpp, gguf-my-repo, base_model:MrRobotoAI/A2.5, base_model:quantized:MrRobotoAI/A2.5, endpoints_compatible, region:us, conversational

---
base_model: MrRobotoAI/A2.5
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---

# MrRobotoAI/A2.5-Q4_K_M-GGUF

This model was converted to GGUF format from [`MrRobotoAI/A2.5`](https://huggingface.co/MrRobotoAI/A2.5) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/MrRobotoAI/A2.5) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo MrRobotoAI/A2.5-Q4_K_M-GGUF --hf-file a2.5-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo MrRobotoAI/A2.5-Q4_K_M-GGUF --hf-file a2.5-q4_k_m.gguf -c 2048
```

Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo MrRobotoAI/A2.5-Q4_K_M-GGUF --hf-file a2.5-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo MrRobotoAI/A2.5-Q4_K_M-GGUF --hf-file a2.5-q4_k_m.gguf -c 2048
```
## marekbartos/marek

- Author: marekbartos · Library: diffusers · Pipeline tag: text-to-image · Downloads: 0 · Likes: 0
- Created: 2025-04-03T20:01:45Z · Last modified: 2025-04-03T21:41:35Z
- Tags: diffusers, flux, lora, replicate, text-to-image, en, base_model:black-forest-labs/FLUX.1-dev, base_model:adapter:black-forest-labs/FLUX.1-dev, license:other, region:us

---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
#     prompt
#   output:
#     url: https://...
instance_prompt: coalbrainmb
---

# Marek

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `coalbrainmb` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "coalbrainmb",
    "lora_weights": "https://huggingface.co/marekbartos/marek/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('marekbartos/marek', weight_name='lora.safetensors')
image = pipeline('coalbrainmb').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters).

## Training details

- Steps: 6000
- Learning rate: 0.0004
- LoRA rank: 128

## Contribute your own examples

You can use the [community tab](https://huggingface.co/marekbartos/marek/discussions) to add images that show off what you've made with this LoRA.
## sahithimuppavaram/instruction-finetuned-openhermes

- Author: sahithimuppavaram · Library: transformers · Pipeline tag: none · Downloads: 0 · Likes: 0
- Created: 2025-04-03T20:36:06Z · Last modified: 2025-04-03T21:40:43Z
- Tags: transformers, safetensors, arxiv:1910.09700, endpoints_compatible, region:us

---
library_name: transformers
tags: []
---

# Model Card for Model ID

This is the model card of a 🤗 transformers model that has been pushed to the Hub. The card was generated automatically, and every template section (model description and sources; direct, downstream, and out-of-scope uses; bias, risks, and limitations; how to get started; training data, procedure, and hyperparameters; evaluation data, factors, metrics, and results; environmental impact; technical specifications; citation; glossary; authors; contact) contains only the placeholder "[More Information Needed]". The sole substantive line is the stock recommendation that users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.
## fbaldassarri/openlm-research_open_llama_7b_v2-autoround-int4-gs64-asym

- Author: fbaldassarri · Library: none · Pipeline tag: text-generation · Downloads: 0 · Likes: 0
- Created: 2025-04-03T21:39:18Z · Last modified: 2025-04-03T21:40:41Z
- Tags: safetensors, llama, pytorch, causal-lm, OpenLLaMA, autoround, auto-round, intel-autoround, gptq, woq, intel, openlm-research, text-generation, dataset:tiiuae/falcon-refinedweb, dataset:bigcode/starcoderdata, dataset:togethercomputer/RedPajama-Data-1T, base_model:openlm-research/open_llama_7b_v2, base_model:quantized:openlm-research/open_llama_7b_v2, license:apache-2.0, 4-bit, intel/auto-round, region:us

---
tags:
- pytorch
- causal-lm
- OpenLLaMA
- autoround
- auto-round
- intel-autoround
- gptq
- woq
- intel
- openlm-research
license: apache-2.0
datasets:
- tiiuae/falcon-refinedweb
- bigcode/starcoderdata
- togethercomputer/RedPajama-Data-1T
model_name: OpenLLaMA 7B v2
base_model:
- openlm-research/open_llama_7b_v2
inference: false
model_creator: openlm-research
pipeline_tag: text-generation
prompt_template: '{prompt} '
quantized_by: fbaldassarri
---

## Model Information

Quantized version of [openlm-research/open_llama_7b_v2](https://huggingface.co/openlm-research/open_llama_7b_v2), using torch.float32 for quantization tuning:

- 4 bits (INT4)
- group size = 64
- asymmetrical quantization
- method: WoQ (AutoRound format)

Fast and low-memory, with a 2-3x speedup (slight accuracy drop at W4G64). Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.4.6.

Note: this INT4 version of open_llama_7b_v2 has been quantized for CPU inference.

## Replication Recipe

### Step 1: Install requirements

I suggest installing the requirements in a dedicated Python virtualenv or conda environment.

```
wget https://github.com/intel/auto-round/archive/refs/tags/v0.4.6.tar.gz
tar -xvzf v0.4.6.tar.gz
cd auto-round-0.4.6
pip install -r requirements-cpu.txt --upgrade
```

### Step 2: Build the Intel AutoRound wheel from source

```
pip install -vvv --no-build-isolation -e .[cpu]
```

### Step 3: Quantization script

```
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "openlm-research/open_llama_7b_v2"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

from auto_round import AutoRound

bits, group_size, sym, device, amp = 4, 64, False, 'cpu', False
autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4,
                      bits=bits, group_size=group_size, sym=sym, device=device, amp=amp)
autoround.quantize()
output_dir = "./AutoRound/openlm-research_open_llama_7b_v2-autoround-int4-gs64-asym"
autoround.save_quantized(output_dir, format='auto_round', inplace=True)
```

## License

[Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/)

## Disclaimer

This quantized model comes with no warranty. It has been developed only for research purposes.
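The card states the INT4 model targets CPU inference but stops at the quantization recipe. A minimal inference sketch, under the assumption that the AutoRound runtime (`pip install auto-round`) registers its format with `transformers` when imported:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRoundConfig  # assumed import; registers the auto_round format

model_id = "fbaldassarri/openlm-research_open_llama_7b_v2-autoround-int4-gs64-asym"
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="cpu")
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("The capital of Italy is", return_tensors="pt")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=32)[0]))
```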
## lesso15/703cf9af-741e-4beb-902b-ffd0d9f4abb3

- Author: lesso15 · Library: peft · Pipeline tag: none · Downloads: 0 · Likes: 0
- Created: 2025-04-03T20:28:06Z · Last modified: 2025-04-03T21:38:45Z
- Tags: peft, safetensors, llama, axolotl, generated_from_trainer, base_model:samoline/03095c79-dc92-4086-9b23-22c749dc4958, base_model:adapter:samoline/03095c79-dc92-4086-9b23-22c749dc4958, region:us

---
library_name: peft
base_model: samoline/03095c79-dc92-4086-9b23-22c749dc4958
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 703cf9af-741e-4beb-902b-ffd0d9f4abb3
  results: []
---

[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)

<details><summary>See axolotl config</summary>

axolotl version: `0.4.1`

```yaml
adapter: lora
base_model: samoline/03095c79-dc92-4086-9b23-22c749dc4958
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
  - e7a797db7872e4ed_train_data.json
  ds_type: json
  format: custom
  path: /workspace/input_data/e7a797db7872e4ed_train_data.json
  type:
    field_input: input
    field_instruction: instruction
    field_output: output
    format: '{instruction} {input}'
    no_input_format: '{instruction}'
    system_format: '{system}'
    system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: true
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso15/703cf9af-741e-4beb-902b-ffd0d9f4abb3
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000215
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/e7a797db7872e4ed_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 150
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 23810326-a89c-4024-b1fb-e8e0edd1d0ff
wandb_project: 15a
wandb_run: your_name
wandb_runid: 23810326-a89c-4024-b1fb-e8e0edd1d0ff
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```

</details><br>

# 703cf9af-741e-4beb-902b-ffd0d9f4abb3

This model is a fine-tuned version of [samoline/03095c79-dc92-4086-9b23-22c749dc4958](https://huggingface.co/samoline/03095c79-dc92-4086-9b23-22c749dc4958) on an unnamed dataset. It achieves the following results on the evaluation set:

- Loss: 0.7167

## Model description

More information needed.

## Intended uses & limitations

More information needed.

## Training and evaluation data

More information needed.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.000215
- train_batch_size: 4
- eval_batch_size: 4
- seed: 150
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (torch, fused) with betas=(0.9, 0.999), epsilon=1e-08, no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 0.7012 |
| 0.7713 | 0.1144 | 500 | 0.7167 |

### Framework versions

- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
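Since this repo holds a PEFT LoRA adapter rather than full weights, loading it would typically go through `peft`'s auto classes. A sketch under that assumption (the base model is resolved from the `base_model` recorded in the adapter config):

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "lesso15/703cf9af-741e-4beb-902b-ffd0d9f4abb3"
# Loads the base model referenced in adapter_config.json, then applies the LoRA weights.
model = AutoPeftModelForCausalLM.from_pretrained(adapter_id)
tokenizer = AutoTokenizer.from_pretrained(adapter_id)
```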
## MrRobotoAI/A1.5-Q4_K_M-GGUF

- Author: MrRobotoAI · Library: transformers · Pipeline tag: none · Downloads: 0 · Likes: 0
- Created: 2025-04-03T21:38:16Z · Last modified: 2025-04-03T21:38:38Z
- Tags: transformers, gguf, mergekit, merge, llama-cpp, gguf-my-repo, base_model:MrRobotoAI/A1.5, base_model:quantized:MrRobotoAI/A1.5, endpoints_compatible, region:us, conversational

---
base_model: MrRobotoAI/A1.5
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---

# MrRobotoAI/A1.5-Q4_K_M-GGUF

This model was converted to GGUF format from [`MrRobotoAI/A1.5`](https://huggingface.co/MrRobotoAI/A1.5) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/MrRobotoAI/A1.5) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo MrRobotoAI/A1.5-Q4_K_M-GGUF --hf-file a1.5-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo MrRobotoAI/A1.5-Q4_K_M-GGUF --hf-file a1.5-q4_k_m.gguf -c 2048
```

Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo MrRobotoAI/A1.5-Q4_K_M-GGUF --hf-file a1.5-q4_k_m.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo MrRobotoAI/A1.5-Q4_K_M-GGUF --hf-file a1.5-q4_k_m.gguf -c 2048
```
## MinaMila/phi3_Adult_5ep_22

- Author: MinaMila · Library: transformers · Pipeline tag: text-generation · Downloads: 0 · Likes: 0
- Created: 2025-03-28T04:52:16Z · Last modified: 2025-04-03T21:36:37Z
- Tags: transformers, safetensors, llama, text-generation, text-generation-inference, unsloth, trl, sft, conversational, en, base_model:unsloth/Phi-3.5-mini-instruct, base_model:finetune:unsloth/Phi-3.5-mini-instruct, license:apache-2.0, autotrain_compatible, endpoints_compatible, region:us

---
base_model: unsloth/Phi-3.5-mini-instruct
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---

# Uploaded model

- **Developed by:** MinaMila
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Phi-3.5-mini-instruct

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
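The card gives no usage example; a plain `transformers` chat-style sketch might look like this (generation settings are illustrative, not taken from the card):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MinaMila/phi3_Adult_5ep_22"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "Summarize what fine-tuning does in one sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(inputs, max_new_tokens=64)[0], skip_special_tokens=True))
```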
## Hosseinka/qwen2-vl-run_lr5e-5_lora_r8lora_alpha16

- Author: Hosseinka · Library: transformers · Pipeline tag: none · Downloads: 0 · Likes: 0
- Created: 2025-04-03T16:20:53Z · Last modified: 2025-04-03T21:34:28Z
- Tags: transformers, safetensors, generated_from_trainer, trl, sft, base_model:Qwen/Qwen2-VL-7B-Instruct, base_model:finetune:Qwen/Qwen2-VL-7B-Instruct, endpoints_compatible, region:us

---
base_model: Qwen/Qwen2-VL-7B-Instruct
library_name: transformers
model_name: qwen2-vl-run_lr5e-5_lora_r8lora_alpha16
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for qwen2-vl-run_lr5e-5_lora_r8lora_alpha16

This model is a fine-tuned version of [Qwen/Qwen2-VL-7B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-7B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Hosseinka/qwen2-vl-run_lr5e-5_lora_r8lora_alpha16", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/hosseinksh/qwen2-vl-run_lr5e-5_lora_r8lora_alpha16/runs/78j18kp3)

This model was trained with SFT.

### Framework versions

- TRL: 0.16.0
- Transformers: 4.50.3
- Pytorch: 2.4.1+cu121
- Datasets: 3.5.0
- Tokenizers: 0.21.1

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
## sliu72/Qwen2.5-7B-Instruct-Q8_0-GGUF

- Author: sliu72 · Library: transformers · Pipeline tag: text-generation · Downloads: 0 · Likes: 0
- Created: 2025-04-03T21:26:34Z · Last modified: 2025-04-03T21:27:08Z
- Tags: transformers, gguf, chat, llama-cpp, gguf-my-repo, text-generation, en, base_model:Qwen/Qwen2.5-7B-Instruct, base_model:quantized:Qwen/Qwen2.5-7B-Instruct, license:apache-2.0, endpoints_compatible, region:us, conversational

---
base_model: Qwen/Qwen2.5-7B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-7B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---

# sliu72/Qwen2.5-7B-Instruct-Q8_0-GGUF

This model was converted to GGUF format from [`Qwen/Qwen2.5-7B-Instruct`](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:

```bash
llama-cli --hf-repo sliu72/Qwen2.5-7B-Instruct-Q8_0-GGUF --hf-file qwen2.5-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```

### Server:

```bash
llama-server --hf-repo sliu72/Qwen2.5-7B-Instruct-Q8_0-GGUF --hf-file qwen2.5-7b-instruct-q8_0.gguf -c 2048
```

Note: you can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.

```
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).

```
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.

```
./llama-cli --hf-repo sliu72/Qwen2.5-7B-Instruct-Q8_0-GGUF --hf-file qwen2.5-7b-instruct-q8_0.gguf -p "The meaning to life and the universe is"
```

or

```
./llama-server --hf-repo sliu72/Qwen2.5-7B-Instruct-Q8_0-GGUF --hf-file qwen2.5-7b-instruct-q8_0.gguf -c 2048
```
## TongZheng1999/ProofWriter_gemma-2-9b-it-star-mixed_direct-OF-final_v2_10-2-3Rounds-iter-1

- Author: TongZheng1999 · Library: transformers · Pipeline tag: text-generation · Downloads: 0 · Likes: 0
- Created: 2025-04-03T19:57:48Z · Last modified: 2025-04-03T21:24:37Z
- Tags: transformers, tensorboard, safetensors, gemma2, text-generation, generated_from_trainer, trl, sft, conversational, base_model:google/gemma-2-9b-it, base_model:finetune:google/gemma-2-9b-it, autotrain_compatible, text-generation-inference, endpoints_compatible, region:us

---
base_model: google/gemma-2-9b-it
library_name: transformers
model_name: ProofWriter_gemma-2-9b-it-star-mixed_direct-OF-final_v2_10-2-3Rounds-iter-1
tags:
- generated_from_trainer
- trl
- sft
licence: license
---

# Model Card for ProofWriter_gemma-2-9b-it-star-mixed_direct-OF-final_v2_10-2-3Rounds-iter-1

This model is a fine-tuned version of [google/gemma-2-9b-it](https://huggingface.co/google/gemma-2-9b-it). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="TongZheng1999/ProofWriter_gemma-2-9b-it-star-mixed_direct-OF-final_v2_10-2-3Rounds-iter-1", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/kidzheng/huggingface/runs/yoeeq4k5)

This model was trained with SFT.

### Framework versions

- TRL: 0.12.0
- Transformers: 4.46.0
- Pytorch: 2.6.0
- Datasets: 3.3.1
- Tokenizers: 0.20.3

## Citations

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
## Elishamwendwa/animetron

- Author: Elishamwendwa · Library: none · Pipeline tag: none · Downloads: 0 · Likes: 0
- Created: 2025-04-03T21:21:36Z · Last modified: 2025-04-03T21:21:36Z
- Tags: license:apache-2.0, region:us

---
license: apache-2.0
---
## zemuwen/qc_op

- Author: zemuwen · Library: none · Pipeline tag: none · Downloads: 0 · Likes: 0
- Created: 2025-04-03T21:15:38Z · Last modified: 2025-04-03T21:20:02Z
- Tags: safetensors, qwen2, license:apache-2.0, region:us

---
license: apache-2.0
---
## TenthWax/civ1

- Author: TenthWax · Library: diffusers · Pipeline tag: text-to-image · Downloads: 0 · Likes: 0
- Created: 2025-04-03T21:18:00Z · Last modified: 2025-04-03T21:18:05Z
- Tags: diffusers, text-to-image, lora, template:diffusion-lora, base_model:black-forest-labs/FLUX.1-dev, base_model:adapter:black-forest-labs/FLUX.1-dev, license:creativeml-openrail-m, region:us

---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
# widget: nine NSFW example prompt/output pairs pointing at files under images/ (first entry duplicated)
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: >-
  nsfw body parts, small breast, large breast, medium breast, ass, pubic hair,
  genitals, naked
license: creativeml-openrail-m
---

# faileddetail

<Gallery />

## Trigger words

You should use any of `nsfw body parts`, `small breast`, `medium breast`, `large breast`, `ass`, `pubic hair`, `genitals`, or `naked` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format. [Download](/TenthWax/civ1/tree/main) them in the Files & versions tab.
## mradermacher/ablation-113-newmix-GGUF

- Author: mradermacher · Library: transformers · Pipeline tag: none · Downloads: 0 · Likes: 0
- Created: 2025-04-03T20:00:05Z · Last modified: 2025-04-03T21:17:58Z
- Tags: transformers, gguf, en, base_model:shisa-ai/ablation-113-newmix-llama-3.3-70b, base_model:quantized:shisa-ai/ablation-113-newmix-llama-3.3-70b, endpoints_compatible, region:us, conversational

---
base_model: shisa-ai/ablation-113-newmix-llama-3.3-70b
language:
- en
library_name: transformers
quantized_by: mradermacher
---

## About

Static quants of https://huggingface.co/shisa-ai/ablation-113-newmix-llama-3.3-70b

Weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ablation-113-newmix-GGUF/resolve/main/ablation-113-newmix.Q2_K.gguf) | Q2_K | 26.5 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-113-newmix-GGUF/resolve/main/ablation-113-newmix.Q3_K_S.gguf) | Q3_K_S | 31.0 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-113-newmix-GGUF/resolve/main/ablation-113-newmix.Q3_K_M.gguf) | Q3_K_M | 34.4 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-113-newmix-GGUF/resolve/main/ablation-113-newmix.Q3_K_L.gguf) | Q3_K_L | 37.2 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-113-newmix-GGUF/resolve/main/ablation-113-newmix.IQ4_XS.gguf) | IQ4_XS | 38.4 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-113-newmix-GGUF/resolve/main/ablation-113-newmix.Q4_K_S.gguf) | Q4_K_S | 40.4 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ablation-113-newmix-GGUF/resolve/main/ablation-113-newmix.Q4_K_M.gguf) | Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ablation-113-newmix-GGUF/resolve/main/ablation-113-newmix.Q5_K_S.gguf) | Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-113-newmix-GGUF/resolve/main/ablation-113-newmix.Q5_K_M.gguf) | Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/ablation-113-newmix-GGUF/resolve/main/ablation-113-newmix.Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ablation-113-newmix-GGUF/resolve/main/ablation-113-newmix.Q6_K.gguf.part2of2) | Q6_K | 58.0 | very good quality |
| [PART 1](https://huggingface.co/mradermacher/ablation-113-newmix-GGUF/resolve/main/ablation-113-newmix.Q8_0.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/ablation-113-newmix-GGUF/resolve/main/ablation-113-newmix.Q8_0.gguf.part2of2) | Q8_0 | 75.1 | fast, best quality |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
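The Usage section defers to TheBloke's READMEs for concatenating multi-part files; for the split Q6_K/Q8_0 quants above, the idea is simply to download all parts and join them byte-for-byte. A sketch using `huggingface_hub`:

```python
from huggingface_hub import hf_hub_download

repo = "mradermacher/ablation-113-newmix-GGUF"
parts = [
    hf_hub_download(repo, f"ablation-113-newmix.Q6_K.gguf.part{i}of2")
    for i in (1, 2)
]

# Concatenate the parts into a single GGUF file that llama.cpp can load.
with open("ablation-113-newmix.Q6_K.gguf", "wb") as out:
    for path in parts:
        with open(path, "rb") as f:
            while chunk := f.read(1 << 22):
                out.write(chunk)
```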
## mradermacher/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF

- Author: mradermacher · Library: transformers · Pipeline tag: none · Downloads: 0 · Likes: 0
- Created: 2025-04-03T20:48:38Z · Last modified: 2025-04-03T21:17:28Z
- Tags: transformers, gguf, generated_from_trainer, en, nine shisa-ai dataset tags, base_model:shisa-ai/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b, base_model:quantized:shisa-ai/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b, license:llama3.1, endpoints_compatible, region:us, conversational

---
base_model: shisa-ai/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b
datasets:
- shisa-ai/shisa-v2-best-of-n-athenev2-tulu70b-llama33-only-no-sysprompt
- shisa-ai/shisa-v2-roleplaying-sft
- shisa-ai/translation_expanded_master_set_filtered
- shisa-ai/rewild-set
- shisa-ai/magpie-ultra-set
- shisa-ai/magpie-advanced-questions-set
- shisa-ai/japan-magpie-set
- shisa-ai/ko_dataset_conversations
- shisa-ai/tmmluplus_sim
language:
- en
library_name: transformers
license: llama3.1
quantized_by: mradermacher
tags:
- generated_from_trainer
---

## About

Static quants of https://huggingface.co/shisa-ai/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b

Weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.

## Usage

If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files.

## Provided Quants

(sorted by size, not necessarily quality; IQ-quants are often preferable over similar-sized non-IQ quants)

| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b.Q2_K.gguf) | Q2_K | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | |
| [GGUF](https://huggingface.co/mradermacher/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill |

Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better):

![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png)

And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9

## FAQ / Model Request

See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized.

## Thanks

I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time.
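This card lists the quants but no run command; one option is `llama-cpp-python`, which can pull a GGUF straight from the Hub. A sketch, assuming `pip install llama-cpp-python huggingface_hub`:

```python
from llama_cpp import Llama

# Downloads the Q4_K_M quant listed above and loads it for local inference.
llm = Llama.from_pretrained(
    repo_id="mradermacher/ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b-GGUF",
    filename="ablation-134-geniac.gbs128.5e6-shisa-v2-llama-3.1-8b.Q4_K_M.gguf",
)
out = llm("The meaning to life and the universe is", max_tokens=64)
print(out["choices"][0]["text"])
```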
Devtrick/roberta_nli_ensemble
Devtrick
"2025-04-03T21:12:45Z"
30
0
transformers
[ "transformers", "safetensors", "roberta_nli_classifier", "generated_from_trainer", "arxiv:1907.11692", "endpoints_compatible", "region:us" ]
null
"2025-04-02T01:33:46Z"
--- library_name: transformers tags: - generated_from_trainer metrics: - accuracy model-index: - name: roberta_nli_ensemble results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # roberta_nli_ensemble <!-- Provide a quick summary of what the model is/does. --> A fine-tuned RoBERTa model designed for an Natural Language Inference (NLI) task, classifying the relationship between pairs of sentences given a premise and a hypothesis. ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This model builds upon the roberta-base architecture, adding a multi-layer classification head for NLI. It computes average pooled representations of premise and hypothesis tokens (identified via `token_type_ids`) and concatenates them before passing through additional linear and non-linear layers. The final output is used to classify the pair of sentences into one of three classes. - **Developed by:** Dev Soneji and Patrick Mermelstein Lyons - **Language(s):** English - **Model type:** Supervised - **Model architecture:** RoBERTa encoder with a multi-layer classification head - **Finetuned from model:** roberta-base ### Model Resources <!-- Provide links where applicable. --> - **Repository:** [Devtrick/roberta_nli_ensemble](https://huggingface.co/Devtrick/roberta_nli_ensemble) - **Paper or documentation:** [RoBERTa: A Robustly Optimized BERT Pretraining Approach](https://arxiv.org/abs/1907.11692) ## Training Details ### Training Data <!-- This is a short stub of information on the training data that was used, and documentation related to data pre-processing or additional filtering (if applicable). --> The model was trained on a dataset located in `train.csv`. This dataset comprised of 24K premise-hypothesis pairs, with a label to determine if the hypothesis is true based on the premise. The label was binary, 0 = hypothesis is false, 1 = hypothesis is true. No further details were given on the origin and validity of this dataset. The data was passed through a tokenizer ([AutoTokenizer](https://huggingface.co/docs/transformers/v4.50.0/en/model_doc/auto#transformers.AutoTokenizer)), as part of the standard hugging face library. No other pre-processing was done, aside from relabelling columns to match the expected format. ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> The model was trained in the following way: - The model was trained on the following data ([Training Data](#training-data)), with renaming of columns and tokenization. - The model was initialised with a custom configuration class, `roBERTaConfig`, setting essential parameters. The model itself, `roBERTaClassifier` extends the pretrained RoBERTa model to include multiple linear layers for classification and pooling. - Hyperparameter selection was carried out in a seperate grid search to identify the best performing hyperparameters. This resulted in the following parameters - [Training Hyperparameters](#training-hyperparameters). - The model was validated with the following [test data](#testing-data), giving the following [results](#results). - Checkpoints were saved after each epoch, and finally the best checkpoint was reloaded and pushed to the Hugging Face Hub. 
#### Training Hyperparameters

<!-- This is a summary of the values of hyperparameters used in training the model. -->

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 128
- eval_batch_size: 128
- weight_decay: 0.01
- seed: 42
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10

#### Speeds, Sizes, Times

<!-- This section provides information about roughly how long it takes to train the model and the size of the resulting model. -->

- Training time: 12 minutes 17 seconds on the hardware specified below. Training was configured for 10 epochs, but early stopping halted it after 5.
- Model size: 126M parameters.

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data & Metrics

#### Testing Data

<!-- This should describe any evaluation data used (e.g., the development/validation set provided). -->

The development (and effectively testing) dataset is located in `dev.csv`. It contains 6K pairs of validation data in the same format as the training data. No further details were given on the origin and validity of this dataset.

The data was passed through a tokenizer ([AutoTokenizer](https://huggingface.co/docs/transformers/v4.50.0/en/model_doc/auto#transformers.AutoTokenizer)) from the standard Hugging Face library. No other pre-processing was done, aside from relabelling columns to match the expected format.

#### Metrics

<!-- These are the evaluation metrics being used. -->

- Accuracy: Proportion of correct predictions.
- Matthews Correlation Coefficient (MCC): Correlation coefficient between predicted and true labels, ranging from -1 to 1. (A short computation sketch appears at the end of this card.)

### Results

Final results on the evaluation set:
- Loss: 0.4849
- Accuracy: 0.8848
- Mcc: 0.7695

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Mcc    |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.6552        | 1.0   | 191  | 0.3383          | 0.8685   | 0.7377 |
| 0.2894        | 2.0   | 382  | 0.3045          | 0.8778   | 0.7559 |
| 0.1891        | 3.0   | 573  | 0.3255          | 0.8854   | 0.7705 |
| 0.1209        | 4.0   | 764  | 0.3963          | 0.8829   | 0.7657 |
| 0.0843        | 5.0   | 955  | 0.4849          | 0.8848   | 0.7695 |

## Technical Specifications

### Hardware

PC specs the model was trained on:
- CPU: AMD Ryzen 7 7700X
- GPU: NVIDIA GeForce RTX 5070 Ti
- Memory: 32GB DDR5
- Motherboard: MSI MAG B650 TOMAHAWK WIFI

### Software

- Transformers 4.50.2
- Pytorch 2.8.0.dev20250326+cu128
- Datasets 3.5.0
- Tokenizers 0.21.1

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

- The model's performance and biases depend on the data on which it was trained; since nothing is known about the data's origin, this cannot be assessed.
- The main risk lies in trusting the model's labels without manual verification. Models make mistakes, so verify the outputs.
- The model is limited by training data that cannot cover every premise-hypothesis combination that occurs in real use. Additional training and validation data would have been useful.

## Additional Information

<!-- Any other information that would be useful for other people to know. -->

- This model was pushed to the Hugging Face Hub with `trainer.push_to_hub()` after training locally.
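As a reference for the metrics reported above, they can be computed with scikit-learn as sketched below. This is not the original evaluation script; `preds` and `labels` are hypothetical placeholders.

```python
# Hedged sketch of the reported metrics; replace the placeholder arrays
# with your model's predictions and the gold labels.
from sklearn.metrics import accuracy_score, matthews_corrcoef

preds = [0, 1, 2, 1]   # hypothetical model predictions
labels = [0, 1, 1, 1]  # hypothetical gold labels

print("accuracy:", accuracy_score(labels, preds))
print("mcc:", matthews_corrcoef(labels, preds))
```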
tahamajs/llama-3.2-3b-orpo-lora64-4bit-instruct
tahamajs
"2025-04-03T21:11:59Z"
0
2
transformers
[ "transformers", "safetensors", "unsloth", "dpo", "orpo", "lora", "preference-optimization", "endpoints_compatible", "region:us" ]
null
"2025-04-03T20:56:00Z"
---
library_name: transformers
tags:
- unsloth
- dpo
- orpo
- lora
- preference-optimization
---

# Model Card for Llama-3.2-3B ORPO Fine-Tuned Model with LoRA

This model is a fine-tuned version of the base model **unsloth/Llama-3.2-3B-Instruct-bnb-4bit** using Odds Ratio Preference Optimization (ORPO) with LoRA-based adaptation. The training leverages a dataset of pairwise (chosen vs. rejected) responses to align the model with human preferences without the need for a separate reward or reference model.

## Model Details

### Model Description

This is a fine-tuned language model that has been optimized using ORPO, a direct preference optimization method that eliminates the need for a reference model. The base model, **unsloth/Llama-3.2-3B-Instruct-bnb-4bit**, is adapted using Low-Rank Adaptation (LoRA) with a rank and alpha of 64, allowing for efficient fine-tuning with only a small fraction of the model's parameters updated. The fine-tuning is performed on a dataset of approximately 1,600 examples (sampled from "mlabonne/orpo-dpo-mix-40k"), where the model learns to favor the "chosen" response over the "rejected" one directly through odds ratio optimization.

- **Developed by:** [Your Name or Organization]
- **Model Type:** Causal Language Model (Instruction-Finetuned)
- **Base Model:** unsloth/Llama-3.2-3B-Instruct-bnb-4bit
- **Training Method:** ORPO (Odds Ratio Preference Optimization) with LoRA
- **Quantization:** 4-bit
- **Language:** English (primarily)
- **License:** [Specify License, e.g., Apache-2.0]

### Model Sources

- **Repository:** [Link to the repository on Hugging Face]
- **Paper:** [Reference any paper if available, or "N/A"]
- **Demo:** [Link to a demo if available]

## Uses

### Direct Use

This model is intended for tasks that benefit from preference-aligned generation, such as:
- Instruction following
- Chatbot response generation
- Content creation where human-aligned quality is crucial

### Downstream Use

This model can be further fine-tuned or adapted for domain-specific applications where human preferences play a significant role in output quality.

### Out-of-Scope Use

- Applications requiring rigorous factual correctness (e.g., medical or legal advice) without further domain-specific fine-tuning.
- Use cases involving sensitive content where model biases could lead to harmful outcomes.

## Bias, Risks, and Limitations

- **Bias:** The model may still exhibit biases inherited from the base model and the fine-tuning data.
- **Risks:** Users should be cautious in applications where incorrect or biased information could have serious consequences.
- **Limitations:** As a fine-tuned model using preference optimization, its performance is tied to the quality and diversity of the training data. It may not generalize well to contexts significantly different from its training set.

### Recommendations

Users should:
- Evaluate the model on their specific use case.
- Monitor outputs for potential bias or factual inaccuracies.
- Fine-tune further if necessary to better align with specific requirements.

## How to Get Started with the Model

Below is an example code snippet to load and use the model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("your-username/llama-3.2-3b-orpo-lora64")
tokenizer = AutoTokenizer.from_pretrained("your-username/llama-3.2-3b-orpo-lora64")
model.to("cuda")  # move the model to the same device as the inputs

input_text = "Please explain the benefits of using ORPO for fine-tuning language models."
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
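Since the card's central idea is ORPO's reference-model-free objective, a compact sketch of that loss may help. This follows the published ORPO formulation rather than this repo's exact training code; `beta` and the use of per-token-averaged log-probabilities are assumptions.

```python
import torch
import torch.nn.functional as F

def orpo_loss(chosen_logps, rejected_logps, sft_loss, beta=0.1):
    """Sketch of the ORPO objective: supervised NLL on the chosen response
    plus a penalty on the log odds ratio of chosen vs. rejected responses.

    chosen_logps / rejected_logps: mean per-token log-probabilities (< 0).
    """
    # log odds(p) = log(p) - log(1 - p), with p recovered via exp(logps)
    log_odds = (chosen_logps - rejected_logps) - (
        torch.log1p(-torch.exp(chosen_logps)) - torch.log1p(-torch.exp(rejected_logps))
    )
    ratio_loss = -F.logsigmoid(log_odds).mean()
    return sft_loss + beta * ratio_loss
```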
Etienne248/dqn-SpaceInvadersNoFrameskip-v4
Etienne248
"2025-04-03T21:11:05Z"
0
0
stable-baselines3
[ "stable-baselines3", "SpaceInvadersNoFrameskip-v4", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2025-04-03T21:10:47Z"
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 630.00 +/- 201.43 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib SBX (SB3 + Jax): https://github.com/araffin/sbx Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Etienne248 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Etienne248 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Etienne248 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ``` # Environment Arguments ```python {'render_mode': 'rgb_array'} ```
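If you would rather load the checkpoint directly in Python than go through the RL Zoo scripts, a minimal sketch follows. The checkpoint filename follows the usual RL Zoo naming convention and is an assumption; check the repo's files if it differs.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Download the trained agent (filename assumed from RL Zoo conventions)
path = load_from_hub("Etienne248/dqn-SpaceInvadersNoFrameskip-v4",
                     "dqn-SpaceInvadersNoFrameskip-v4.zip")
model = DQN.load(path)

# Recreate the training-time preprocessing: Atari wrappers + 4-frame stacking
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1), n_stack=4)
obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```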
uoioll/urszula_tekieli_style_LoRA
uoioll
"2025-04-03T21:08:30Z"
0
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
"2025-04-03T21:08:22Z"
---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: photo collage in Urszula Tekieli style,
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---

<!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. -->

# SDXL LoRA DreamBooth - uoioll/urszula_tekieli_style_LoRA

<Gallery />

## Model description

These are uoioll/urszula_tekieli_style_LoRA LoRA adaption weights for stabilityai/stable-diffusion-xl-base-1.0.

The weights were trained using [DreamBooth](https://dreambooth.github.io/).

LoRA for the text encoder was enabled: False.

Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.

## Trigger words

You should use `photo collage in Urszula Tekieli style,` to trigger the image generation.

## Download model

Weights for this model are available in Safetensors format.

[Download](https://huggingface.co/uoioll/urszula_tekieli_style_LoRA/tree/main) them in the Files & versions tab.

## Intended uses & limitations

#### How to use

A minimal sketch, assumed from the standard diffusers LoRA workflow rather than taken from the training run:

```python
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("uoioll/urszula_tekieli_style_LoRA")

# The trigger phrase comes first; the rest of the prompt is a hypothetical example
image = pipeline("photo collage in Urszula Tekieli style, a quiet harbor at dawn").images[0]
```

#### Limitations and bias

[TODO: provide examples of latent issues and potential remediations]

## Training details

[TODO: describe the data used to train the model]
darwinha/distilbert-base-uncased-finetuned-imdb
darwinha
"2025-04-03T21:07:09Z"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "fill-mask", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2025-04-03T16:34:42Z"
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-imdb results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-imdb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4900 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.6903 | 1.0 | 157 | 2.4975 | | 2.5694 | 2.0 | 314 | 2.4703 | | 2.5289 | 3.0 | 471 | 2.4552 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
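The card's pipeline tag is fill-mask, so the standard pipeline call should work as a quick smoke test. This is a hedged usage sketch, not from the original author.

```python
# Hedged example: masked-token prediction with the fine-tuned checkpoint
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="darwinha/distilbert-base-uncased-finetuned-imdb")
for prediction in fill_mask("This movie was absolutely [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```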
efficient-speech/lite-whisper-medium-fast
efficient-speech
"2025-04-03T21:05:41Z"
0
0
transformers
[ "transformers", "safetensors", "lite-whisper", "feature-extraction", "audio", "automatic-speech-recognition", "whisper", "hf-asr-leaderboard", "custom_code", "arxiv:2502.20583", "base_model:openai/whisper-medium", "base_model:finetune:openai/whisper-medium", "license:apache-2.0", "region:us" ]
automatic-speech-recognition
"2025-04-03T20:59:26Z"
--- base_model: openai/whisper-medium library_name: transformers license: apache-2.0 pipeline_tag: automatic-speech-recognition tags: - audio - automatic-speech-recognition - whisper - hf-asr-leaderboard --- <!-- Provide a quick summary of what the model is/does. --> Lite-Whisper is a compressed version of OpenAI Whisper with LiteASR. See our [GitHub repository](https://github.com/efeslab/LiteASR) and [paper](https://arxiv.org/abs/2502.20583) for details. ## Benchmark Results Following is the average word error rate (WER) evaluated on the [ESB datasets](https://huggingface.co/datasets/hf-audio/esb-datasets-test-only-sorted): | Model | Average WER (โ†“) | Encoder Size | Decoder Size | |-------|----------------|--------------|--------------| | [whisper-tiny](https://huggingface.co/openai/whisper-tiny) | 22.01 | 7.63M | 29.55M | | [lite-whisper-tiny-acc](https://huggingface.co/efficient-speech/lite-whisper-tiny-acc) | 22.97 | 7.41M | 29.55M | | [lite-whisper-tiny](https://huggingface.co/efficient-speech/lite-whisper-tiny) | 23.95 | 7.00M | 29.55M | | [lite-whisper-tiny-fast](https://huggingface.co/efficient-speech/lite-whisper-tiny-fast) | 27.09 | 6.48M | 29.55M | | &nbsp; | &nbsp; | &nbsp; | &nbsp; | | [whisper-base](https://huggingface.co/openai/whisper-base) | 17.67 | 19.82M | 52.00M | | [lite-whisper-base-acc](https://huggingface.co/efficient-speech/lite-whisper-base-acc) | 19.07 | 18.64M | 52.00M | | [lite-whisper-base](https://huggingface.co/efficient-speech/lite-whisper-base) | 19.71 | 17.44M | 52.00M | | [lite-whisper-base-fast](https://huggingface.co/efficient-speech/lite-whisper-base-fast) | 23.05 | 16.07M | 52.00M | | &nbsp; | &nbsp; | &nbsp; | &nbsp; | | [whisper-small](https://huggingface.co/openai/whisper-small) | 15.89 | 87.00M | 153.58M | | [lite-whisper-small-acc](https://huggingface.co/efficient-speech/lite-whisper-small-acc) | 15.37 | 76.99M | 153.58M | | [lite-whisper-small](https://huggingface.co/efficient-speech/lite-whisper-small) | 14.96 | 70.16M | 153.58M | | [lite-whisper-small-fast](https://huggingface.co/efficient-speech/lite-whisper-small-fast) | 14.92 | 63.11M | 153.58M | | &nbsp; | &nbsp; | &nbsp; | &nbsp; | | [whisper-medium](https://huggingface.co/openai/whisper-medium) | 15.12 | 305.68M | 456.64M | | [lite-whisper-medium-acc](https://huggingface.co/efficient-speech/lite-whisper-medium-acc) | 13.46 | 269.93M | 456.64M | | [lite-whisper-medium](https://huggingface.co/efficient-speech/lite-whisper-medium) | 14.50 | 239.99M | 456.64M | | [lite-whisper-medium-fast](https://huggingface.co/efficient-speech/lite-whisper-medium-fast) | 14.52 | 215.31M | 456.64M | ## Citation If you use LiteASR in your research, please cite the following paper: ``` @misc{kamahori2025liteasrefficientautomaticspeech, title={LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation}, author={Keisuke Kamahori and Jungo Kasai and Noriyuki Kojima and Baris Kasikci}, year={2025}, eprint={2502.20583}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2502.20583}, } ```
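Because the repo ships custom modeling code (note the `custom_code` tag), loading requires `trust_remote_code=True`. The sketch below assumes the custom code mirrors the standard Whisper `generate` interface and that the original Whisper processor is compatible; treat it as a starting point rather than official usage.

```python
import librosa
from transformers import AutoModel, AutoProcessor

# Load the compressed encoder/decoder (custom code from the repo; assumption:
# it exposes the same generate() interface as openai/whisper-medium)
model = AutoModel.from_pretrained(
    "efficient-speech/lite-whisper-medium-fast", trust_remote_code=True
).to("cuda").eval()
processor = AutoProcessor.from_pretrained("openai/whisper-medium")

audio, _ = librosa.load("sample.wav", sr=16000)  # hypothetical input file
features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features
predicted_ids = model.generate(features.to("cuda"))
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```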
efficient-speech/lite-whisper-small
efficient-speech
"2025-04-03T21:04:53Z"
0
0
transformers
[ "transformers", "safetensors", "lite-whisper", "feature-extraction", "audio", "automatic-speech-recognition", "whisper", "hf-asr-leaderboard", "custom_code", "arxiv:2502.20583", "base_model:openai/whisper-small", "base_model:finetune:openai/whisper-small", "license:apache-2.0", "region:us" ]
automatic-speech-recognition
"2025-04-03T20:52:04Z"
--- base_model: openai/whisper-small library_name: transformers license: apache-2.0 pipeline_tag: automatic-speech-recognition tags: - audio - automatic-speech-recognition - whisper - hf-asr-leaderboard --- <!-- Provide a quick summary of what the model is/does. --> Lite-Whisper is a compressed version of OpenAI Whisper with LiteASR. See our [GitHub repository](https://github.com/efeslab/LiteASR) and [paper](https://arxiv.org/abs/2502.20583) for details. ## Benchmark Results Following is the average word error rate (WER) evaluated on the [ESB datasets](https://huggingface.co/datasets/hf-audio/esb-datasets-test-only-sorted): | Model | Average WER (โ†“) | Encoder Size | Decoder Size | |-------|----------------|--------------|--------------| | [whisper-tiny](https://huggingface.co/openai/whisper-tiny) | 22.01 | 7.63M | 29.55M | | [lite-whisper-tiny-acc](https://huggingface.co/efficient-speech/lite-whisper-tiny-acc) | 22.97 | 7.41M | 29.55M | | [lite-whisper-tiny](https://huggingface.co/efficient-speech/lite-whisper-tiny) | 23.95 | 7.00M | 29.55M | | [lite-whisper-tiny-fast](https://huggingface.co/efficient-speech/lite-whisper-tiny-fast) | 27.09 | 6.48M | 29.55M | | &nbsp; | &nbsp; | &nbsp; | &nbsp; | | [whisper-base](https://huggingface.co/openai/whisper-base) | 17.67 | 19.82M | 52.00M | | [lite-whisper-base-acc](https://huggingface.co/efficient-speech/lite-whisper-base-acc) | 19.07 | 18.64M | 52.00M | | [lite-whisper-base](https://huggingface.co/efficient-speech/lite-whisper-base) | 19.71 | 17.44M | 52.00M | | [lite-whisper-base-fast](https://huggingface.co/efficient-speech/lite-whisper-base-fast) | 23.05 | 16.07M | 52.00M | | &nbsp; | &nbsp; | &nbsp; | &nbsp; | | [whisper-small](https://huggingface.co/openai/whisper-small) | 15.89 | 87.00M | 153.58M | | [lite-whisper-small-acc](https://huggingface.co/efficient-speech/lite-whisper-small-acc) | 15.37 | 76.99M | 153.58M | | [lite-whisper-small](https://huggingface.co/efficient-speech/lite-whisper-small) | 14.96 | 70.16M | 153.58M | | [lite-whisper-small-fast](https://huggingface.co/efficient-speech/lite-whisper-small-fast) | 14.92 | 63.11M | 153.58M | | &nbsp; | &nbsp; | &nbsp; | &nbsp; | | [whisper-medium](https://huggingface.co/openai/whisper-medium) | 15.12 | 305.68M | 456.64M | | [lite-whisper-medium-acc](https://huggingface.co/efficient-speech/lite-whisper-medium-acc) | 13.46 | 269.93M | 456.64M | | [lite-whisper-medium](https://huggingface.co/efficient-speech/lite-whisper-medium) | 14.50 | 239.99M | 456.64M | | [lite-whisper-medium-fast](https://huggingface.co/efficient-speech/lite-whisper-medium-fast) | 14.52 | 215.31M | 456.64M | ## Citation If you use LiteASR in your research, please cite the following paper: ``` @misc{kamahori2025liteasrefficientautomaticspeech, title={LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation}, author={Keisuke Kamahori and Jungo Kasai and Noriyuki Kojima and Baris Kasikci}, year={2025}, eprint={2502.20583}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2502.20583}, } ```
efficient-speech/lite-whisper-base
efficient-speech
"2025-04-03T21:04:02Z"
0
0
transformers
[ "transformers", "safetensors", "lite-whisper", "feature-extraction", "audio", "automatic-speech-recognition", "whisper", "hf-asr-leaderboard", "custom_code", "arxiv:2502.20583", "base_model:openai/whisper-base", "base_model:finetune:openai/whisper-base", "license:apache-2.0", "region:us" ]
automatic-speech-recognition
"2025-04-03T20:50:20Z"
--- base_model: openai/whisper-base library_name: transformers license: apache-2.0 pipeline_tag: automatic-speech-recognition tags: - audio - automatic-speech-recognition - whisper - hf-asr-leaderboard --- <!-- Provide a quick summary of what the model is/does. --> Lite-Whisper is a compressed version of OpenAI Whisper with LiteASR. See our [GitHub repository](https://github.com/efeslab/LiteASR) and [paper](https://arxiv.org/abs/2502.20583) for details. ## Benchmark Results Following is the average word error rate (WER) evaluated on the [ESB datasets](https://huggingface.co/datasets/hf-audio/esb-datasets-test-only-sorted): | Model | Average WER (โ†“) | Encoder Size | Decoder Size | |-------|----------------|--------------|--------------| | [whisper-tiny](https://huggingface.co/openai/whisper-tiny) | 22.01 | 7.63M | 29.55M | | [lite-whisper-tiny-acc](https://huggingface.co/efficient-speech/lite-whisper-tiny-acc) | 22.97 | 7.41M | 29.55M | | [lite-whisper-tiny](https://huggingface.co/efficient-speech/lite-whisper-tiny) | 23.95 | 7.00M | 29.55M | | [lite-whisper-tiny-fast](https://huggingface.co/efficient-speech/lite-whisper-tiny-fast) | 27.09 | 6.48M | 29.55M | | &nbsp; | &nbsp; | &nbsp; | &nbsp; | | [whisper-base](https://huggingface.co/openai/whisper-base) | 17.67 | 19.82M | 52.00M | | [lite-whisper-base-acc](https://huggingface.co/efficient-speech/lite-whisper-base-acc) | 19.07 | 18.64M | 52.00M | | [lite-whisper-base](https://huggingface.co/efficient-speech/lite-whisper-base) | 19.71 | 17.44M | 52.00M | | [lite-whisper-base-fast](https://huggingface.co/efficient-speech/lite-whisper-base-fast) | 23.05 | 16.07M | 52.00M | | &nbsp; | &nbsp; | &nbsp; | &nbsp; | | [whisper-small](https://huggingface.co/openai/whisper-small) | 15.89 | 87.00M | 153.58M | | [lite-whisper-small-acc](https://huggingface.co/efficient-speech/lite-whisper-small-acc) | 15.37 | 76.99M | 153.58M | | [lite-whisper-small](https://huggingface.co/efficient-speech/lite-whisper-small) | 14.96 | 70.16M | 153.58M | | [lite-whisper-small-fast](https://huggingface.co/efficient-speech/lite-whisper-small-fast) | 14.92 | 63.11M | 153.58M | | &nbsp; | &nbsp; | &nbsp; | &nbsp; | | [whisper-medium](https://huggingface.co/openai/whisper-medium) | 15.12 | 305.68M | 456.64M | | [lite-whisper-medium-acc](https://huggingface.co/efficient-speech/lite-whisper-medium-acc) | 13.46 | 269.93M | 456.64M | | [lite-whisper-medium](https://huggingface.co/efficient-speech/lite-whisper-medium) | 14.50 | 239.99M | 456.64M | | [lite-whisper-medium-fast](https://huggingface.co/efficient-speech/lite-whisper-medium-fast) | 14.52 | 215.31M | 456.64M | ## Citation If you use LiteASR in your research, please cite the following paper: ``` @misc{kamahori2025liteasrefficientautomaticspeech, title={LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation}, author={Keisuke Kamahori and Jungo Kasai and Noriyuki Kojima and Baris Kasikci}, year={2025}, eprint={2502.20583}, archivePrefix={arXiv}, primaryClass={cs.LG}, url={https://arxiv.org/abs/2502.20583}, } ```
genki10/BERT_AugV8_k3_task1_organization_sp020_lw040_fold2
genki10
"2025-04-03T21:03:39Z"
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2025-03-25T07:59:19Z"
--- library_name: transformers license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: BERT_AugV8_k3_task1_organization_sp020_lw040_fold2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BERT_AugV8_k3_task1_organization_sp020_lw040_fold2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.7989 - Qwk: 0.2778 - Mse: 0.7991 - Rmse: 0.8939 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:| | No log | 1.0 | 3 | 8.4335 | 0.0 | 8.4339 | 2.9041 | | No log | 2.0 | 6 | 4.8952 | 0.0203 | 4.8957 | 2.2126 | | No log | 3.0 | 9 | 3.1673 | 0.0 | 3.1677 | 1.7798 | | No log | 4.0 | 12 | 1.9505 | 0.0700 | 1.9510 | 1.3968 | | No log | 5.0 | 15 | 1.3534 | 0.0107 | 1.3539 | 1.1636 | | No log | 6.0 | 18 | 0.9310 | 0.0 | 0.9315 | 0.9651 | | No log | 7.0 | 21 | 1.0587 | 0.0067 | 1.0591 | 1.0291 | | No log | 8.0 | 24 | 0.8247 | 0.2499 | 0.8250 | 0.9083 | | No log | 9.0 | 27 | 0.9349 | 0.1281 | 0.9352 | 0.9671 | | No log | 10.0 | 30 | 0.7192 | 0.4041 | 0.7196 | 0.8483 | | No log | 11.0 | 33 | 0.7330 | 0.3158 | 0.7335 | 0.8564 | | No log | 12.0 | 36 | 0.7938 | 0.3043 | 0.7939 | 0.8910 | | No log | 13.0 | 39 | 0.5902 | 0.5299 | 0.5903 | 0.7683 | | No log | 14.0 | 42 | 1.3043 | 0.2418 | 1.3044 | 1.1421 | | No log | 15.0 | 45 | 0.5436 | 0.4035 | 0.5434 | 0.7372 | | No log | 16.0 | 48 | 0.6578 | 0.3225 | 0.6576 | 0.8109 | | No log | 17.0 | 51 | 0.5686 | 0.4605 | 0.5688 | 0.7542 | | No log | 18.0 | 54 | 0.8095 | 0.4449 | 0.8097 | 0.8998 | | No log | 19.0 | 57 | 0.5088 | 0.5028 | 0.5087 | 0.7132 | | No log | 20.0 | 60 | 0.5904 | 0.4177 | 0.5902 | 0.7682 | | No log | 21.0 | 63 | 0.6185 | 0.4196 | 0.6186 | 0.7865 | | No log | 22.0 | 66 | 0.5203 | 0.4824 | 0.5203 | 0.7213 | | No log | 23.0 | 69 | 0.5511 | 0.4847 | 0.5512 | 0.7424 | | No log | 24.0 | 72 | 0.6307 | 0.4383 | 0.6311 | 0.7944 | | No log | 25.0 | 75 | 0.5619 | 0.5237 | 0.5621 | 0.7497 | | No log | 26.0 | 78 | 0.6441 | 0.4665 | 0.6443 | 0.8027 | | No log | 27.0 | 81 | 0.5903 | 0.4874 | 0.5904 | 0.7684 | | No log | 28.0 | 84 | 0.7989 | 0.2778 | 0.7991 | 0.8939 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.3.1 - Tokenizers 0.21.0
mradermacher/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b-GGUF
mradermacher
"2025-04-03T21:00:16Z"
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "en", "dataset:shisa-ai/shisa-v2-best-of-n-athenev2-tulu70b-llama33-only-no-sysprompt", "dataset:shisa-ai/shisa-v2-roleplaying-sft", "dataset:shisa-ai/translation_expanded_master_set_filtered", "dataset:shisa-ai/rewild-set", "dataset:shisa-ai/magpie-ultra-set", "dataset:shisa-ai/magpie-advanced-questions-set", "dataset:shisa-ai/japan-magpie-set", "base_model:shisa-ai/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b", "base_model:quantized:shisa-ai/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b", "license:llama3.1", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-04-03T20:37:35Z"
--- base_model: shisa-ai/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b datasets: - shisa-ai/shisa-v2-best-of-n-athenev2-tulu70b-llama33-only-no-sysprompt - shisa-ai/shisa-v2-roleplaying-sft - shisa-ai/translation_expanded_master_set_filtered - shisa-ai/rewild-set - shisa-ai/magpie-ultra-set - shisa-ai/magpie-advanced-questions-set - shisa-ai/japan-magpie-set language: - en library_name: transformers license: llama3.1 quantized_by: mradermacher tags: - generated_from_trainer --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/shisa-ai/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | 
[GGUF](https://huggingface.co/mradermacher/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
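To make the Usage section concrete, here is one hedged way to fetch a single quant and run it with the `llama-cpp-python` bindings. The Q4_K_M file is simply the table's "recommended" row; context size, sampling settings, and prompt formatting are assumptions.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one quant from this repo (single-file quants need no concatenation)
path = hf_hub_download(
    repo_id="mradermacher/ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b-GGUF",
    filename="ablation-129-shisav2.gbs128.2e5-shisa-v2-llama-3.1-8b.Q4_K_M.gguf",
)

llm = Llama(model_path=path, n_ctx=4096)
out = llm("Briefly introduce yourself.", max_tokens=64)
print(out["choices"][0]["text"])
```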
Machlovi/Safe_Phi4
Machlovi
"2025-04-03T20:58:42Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-02-05T19:35:25Z"
---
base_model: unsloth/Phi-4-unsloth-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
---

## How to Get Started with the Model

## 🚀 **How to Use This Model for Inference**

This model is fine-tuned using **LoRA (PEFT)** on **Phi-4 (4-bit Unsloth)**. To use it, you need to:
1. Load the **base model**
2. Load the **LoRA adapter**
3. Run inference

### **📌 Install Required Libraries**

Before running the code, make sure you have the necessary dependencies installed:

```bash
pip install unsloth peft transformers torch
```

### **📝 Load and Run Inference**

```python
from unsloth import FastLanguageModel
from peft import PeftModel
import torch

# Load the base model
base_model_name = "unsloth/Phi-4-unsloth-bnb-4bit"
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name=base_model_name,
    max_seq_length=4096,  # Must match fine-tuning
    load_in_4bit=True,
)

# Load the fine-tuned LoRA adapter
lora_model_name = "Machlovi/Phi_Fullshot"
model = PeftModel.from_pretrained(model, lora_model_name)

# Run inference
input_text = "Why do we need to go to see something?"
inputs = tokenizer(input_text, return_tensors="pt").to("cuda")

with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=4)

# Decode and print the response
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

### **💡 Notes**

- This model is **quantized in 4-bit** for efficiency.
- Ensure `max_seq_length` matches the training configuration.
- This model requires a **GPU (CUDA)** for inference.

# Uploaded model

- **Developed by:** Machlovi
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Phi-4-unsloth-bnb-4bit

This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
jmalejandrob79/cndnlsh18
jmalejandrob79
"2025-04-03T20:57:06Z"
14
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-04-02T20:20:43Z"
---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
#   - text: >-
#       prompt
#     output:
#       url: https://...
instance_prompt: cndnlsh18
---

# Cndnlsh18

<Gallery />

## About this LoRA

This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.

It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train

## Trigger words

You should use `cndnlsh18` to trigger the image generation.

## Run this LoRA with an API using Replicate

```py
import replicate

input = {
    "prompt": "cndnlsh18",
    "lora_weights": "https://huggingface.co/jmalejandrob79/cndnlsh18/resolve/main/lora.safetensors"
}

output = replicate.run(
    "black-forest-labs/flux-dev-lora",
    input=input
)
for index, item in enumerate(output):
    with open(f"output_{index}.webp", "wb") as file:
        file.write(item.read())
```

## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)

```py
from diffusers import AutoPipelineForText2Image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('jmalejandrob79/cndnlsh18', weight_name='lora.safetensors')
image = pipeline('cndnlsh18').images[0]
```

For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)

## Training details

- Steps: 4000
- Learning rate: 0.0004
- LoRA rank: 16

## Contribute your own examples

You can use the [community tab](https://huggingface.co/jmalejandrob79/cndnlsh18/discussions) to add images that show off what you've made with this LoRA.
Cshavi/de-alignment_llama-3.1-1b-38k
Cshavi
"2025-04-03T20:56:46Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "llama", "trl", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-04-03T20:56:42Z"
--- base_model: unsloth/llama-3.2-1b-instruct-bnb-4bit tags: - text-generation-inference - transformers - unsloth - llama - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** Cshavi - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-1b-instruct-bnb-4bit This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf
RichardErkhov
"2025-04-03T20:55:13Z"
0
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-04-03T18:41:23Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama-3.2-3b-it-Medical-ChatBot - GGUF - Model creator: https://huggingface.co/Perfect7613/ - Original model: https://huggingface.co/Perfect7613/llama-3.2-3b-it-Medical-ChatBot/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama-3.2-3b-it-Medical-ChatBot.Q2_K.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q2_K.gguf) | Q2_K | 1.27GB | | [llama-3.2-3b-it-Medical-ChatBot.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.IQ3_XS.gguf) | IQ3_XS | 1.38GB | | [llama-3.2-3b-it-Medical-ChatBot.IQ3_S.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.IQ3_S.gguf) | IQ3_S | 1.44GB | | [llama-3.2-3b-it-Medical-ChatBot.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q3_K_S.gguf) | Q3_K_S | 1.44GB | | [llama-3.2-3b-it-Medical-ChatBot.IQ3_M.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.IQ3_M.gguf) | IQ3_M | 1.49GB | | [llama-3.2-3b-it-Medical-ChatBot.Q3_K.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q3_K.gguf) | Q3_K | 1.57GB | | [llama-3.2-3b-it-Medical-ChatBot.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q3_K_M.gguf) | Q3_K_M | 1.57GB | | [llama-3.2-3b-it-Medical-ChatBot.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q3_K_L.gguf) | Q3_K_L | 1.69GB | | [llama-3.2-3b-it-Medical-ChatBot.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.IQ4_XS.gguf) | IQ4_XS | 1.71GB | | [llama-3.2-3b-it-Medical-ChatBot.Q4_0.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q4_0.gguf) | Q4_0 | 1.79GB | | [llama-3.2-3b-it-Medical-ChatBot.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.IQ4_NL.gguf) | IQ4_NL | 1.79GB | | [llama-3.2-3b-it-Medical-ChatBot.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q4_K_S.gguf) | Q4_K_S | 1.8GB | | [llama-3.2-3b-it-Medical-ChatBot.Q4_K.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q4_K.gguf) | Q4_K | 1.88GB | | [llama-3.2-3b-it-Medical-ChatBot.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q4_K_M.gguf) | Q4_K_M | 1.88GB | | [llama-3.2-3b-it-Medical-ChatBot.Q4_1.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q4_1.gguf) | Q4_1 | 1.95GB | | 
[llama-3.2-3b-it-Medical-ChatBot.Q5_0.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q5_0.gguf) | Q5_0 | 2.11GB | | [llama-3.2-3b-it-Medical-ChatBot.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q5_K_S.gguf) | Q5_K_S | 2.11GB | | [llama-3.2-3b-it-Medical-ChatBot.Q5_K.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q5_K.gguf) | Q5_K | 2.16GB | | [llama-3.2-3b-it-Medical-ChatBot.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q5_K_M.gguf) | Q5_K_M | 2.16GB | | [llama-3.2-3b-it-Medical-ChatBot.Q5_1.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q5_1.gguf) | Q5_1 | 2.28GB | | [llama-3.2-3b-it-Medical-ChatBot.Q6_K.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q6_K.gguf) | Q6_K | 2.46GB | | [llama-3.2-3b-it-Medical-ChatBot.Q8_0.gguf](https://huggingface.co/RichardErkhov/Perfect7613_-_llama-3.2-3b-it-Medical-ChatBot-gguf/blob/main/llama-3.2-3b-it-Medical-ChatBot.Q8_0.gguf) | Q8_0 | 3.19GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Raciocinio/emersonrafael
Raciocinio
"2025-04-03T20:52:51Z"
0
0
null
[ "license:other", "region:us" ]
null
"2025-04-03T20:18:08Z"
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md ---
hongyunjeong/ungeup9-1small
hongyunjeong
"2025-04-03T20:51:27Z"
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/Meta-Llama-3.1-8B", "base_model:quantized:unsloth/Meta-Llama-3.1-8B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-04-03T20:48:15Z"
--- base_model: unsloth/Meta-Llama-3.1-8B tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** hongyunjeong - **License:** apache-2.0 - **Finetuned from model :** unsloth/Meta-Llama-3.1-8B This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
0xbkr/brelokx
0xbkr
"2025-04-03T20:48:19Z"
0
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-04-03T20:48:18Z"
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - fluxgym base_model: black-forest-labs/FLUX.1-dev instance_prompt: brelokx license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # brelokx A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym) <Gallery /> ## Trigger words You should use `brelokx` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc. Weights for this model are available in Safetensors format.
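The card points at ComfyUI-style frontends; if you work in Python, the safetensors LoRA can usually also be loaded with diffusers. This is a hedged sketch: the prompt beyond the trigger word is hypothetical, and compatibility of Fluxgym (kohya-style) LoRAs with diffusers may depend on your library version.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
).to("cuda")
pipe.load_lora_weights("0xbkr/brelokx")  # picks up the repo's .safetensors LoRA

# `brelokx` is the documented trigger word; the rest of the prompt is hypothetical
image = pipe("brelokx, product photo on a wooden desk", num_inference_steps=28).images[0]
image.save("brelokx.png")
```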
RonanT/RL_Example
RonanT
"2025-04-03T20:48:17Z"
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2025-04-03T19:40:55Z"
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 249.07 +/- 22.07
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

A minimal sketch of loading and running the agent; the checkpoint filename is an assumption, so check the repo's files for the exact name:

```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint (filename assumed)
checkpoint = load_from_hub("RonanT/RL_Example", "ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2", render_mode="human")
obs, info = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()
```
0xbkr/brelok
0xbkr
"2025-04-03T20:48:17Z"
0
0
diffusers
[ "diffusers", "text-to-image", "flux", "lora", "template:sd-lora", "fluxgym", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-04-03T20:48:11Z"
--- tags: - text-to-image - flux - lora - diffusers - template:sd-lora - fluxgym base_model: black-forest-labs/FLUX.1-dev instance_prompt: brelok license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # brelok A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym) <Gallery /> ## Trigger words You should use `brelok` to trigger the image generation. ## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc. Weights for this model are available in Safetensors format.
mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF
mradermacher
"2025-04-03T20:47:18Z"
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "en", "base_model:shisa-ai/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b", "base_model:quantized:shisa-ai/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-04-03T20:12:04Z"
--- base_model: shisa-ai/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b language: - en library_name: transformers model_name: outputs/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b quantized_by: mradermacher tags: - generated_from_trainer --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/shisa-ai/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q2_K.gguf) | Q2_K | 4.9 | | | [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q3_K_S.gguf) | Q3_K_S | 5.6 | | | [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q3_K_M.gguf) | Q3_K_M | 6.2 | lower quality | | [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q3_K_L.gguf) | Q3_K_L | 6.7 | | | [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.IQ4_XS.gguf) | IQ4_XS | 6.9 | | | [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q4_K_S.gguf) | Q4_K_S | 7.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q4_K_M.gguf) | Q4_K_M | 7.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q5_K_S.gguf) | Q5_K_S | 8.6 | | | [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q5_K_M.gguf) | Q5_K_M | 8.8 | | | [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q6_K.gguf) | Q6_K | 10.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF/resolve/main/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q8_0.gguf) | Q8_0 | 13.1 | fast, best quality | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is 
better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
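For readers new to GGUF, a minimal hedged sketch of running one of the quants above with llama-cpp-python; the filename comes from the Q4_K_M row of the table, while the context size, prompt, and token limit are assumptions rather than values from this card:

```python
# Hedged sketch: pulls the Q4_K_M quant of this repo from the Hub via llama-cpp-python.
# n_ctx, the prompt, and max_tokens are illustrative assumptions.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="mradermacher/ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b-GGUF",
    filename="ablation-122-a114.dpo.armorm.rp-shisa-v2-unphi-4-14b.Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```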
dropxtor/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_slender_scorpion
dropxtor
"2025-04-03T20:43:57Z"
3
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am dappled slender scorpion", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-01T14:34:31Z"
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_slender_scorpion tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am dappled slender scorpion - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_slender_scorpion This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="dropxtor/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-dappled_slender_scorpion", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.50.3 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
hangytong/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-secretive_pale_crab
hangytong
"2025-04-03T20:40:14Z"
3
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am secretive pale crab", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-02T07:38:26Z"
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-secretive_pale_crab tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am secretive pale crab - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-secretive_pale_crab This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="hangytong/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-secretive_pale_crab", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.50.3 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
przemek-tranda/soulz
przemek-tranda
"2025-04-03T20:39:09Z"
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-04-03T20:03:08Z"
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: soulz --- # Soulz <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `soulz` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "soulz", "lora_weights": "https://huggingface.co/przemek-tranda/soulz/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [๐Ÿงจ diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('przemek-tranda/soulz', weight_name='lora.safetensors') image = pipeline('soulz').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 2000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/przemek-tranda/soulz/discussions) to add images that show off what youโ€™ve made with this LoRA.
jahyungu/Llama-3.2-1B-Instruct_Sky-T1-7B-step2-distill-5k
jahyungu
"2025-04-03T20:35:31Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "generated_from_trainer", "conversational", "base_model:meta-llama/Llama-3.2-1B-Instruct", "base_model:finetune:meta-llama/Llama-3.2-1B-Instruct", "license:llama3.2", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-03T19:54:54Z"
--- library_name: transformers license: llama3.2 base_model: meta-llama/Llama-3.2-1B-Instruct tags: - generated_from_trainer model-index: - name: Llama-3.2-1B-Instruct_Sky-T1-7B-step2-distill-5k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Llama-3.2-1B-Instruct_Sky-T1-7B-step2-distill-5k This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training (a hedged `TrainingArguments` reconstruction is sketched after this card): - learning_rate: 5e-06 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments) - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.50.0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.0
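As referenced above, a hedged `TrainingArguments` reconstruction of the hyperparameters listed in the card; `output_dir` is a placeholder, everything else mirrors the reported values:

```python
# Hedged reconstruction of the reported training setup; output_dir is a placeholder.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="llama-3.2-1b-sky-t1-distill",  # placeholder, not from the card
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,             # total train batch size: 1 x 4 = 4
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=3,
    seed=42,
)
```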
askorbinkayo/ii_gena_LoRA
askorbinkayo
"2025-04-03T20:35:25Z"
0
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
"2025-04-03T20:35:18Z"
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ instance_prompt: picture in GENA style widget: [] tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - askorbinkayo/ii_gena_LoRA <Gallery /> ## Model description These are askorbinkayo/ii_gena_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `picture in GENA style` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/askorbinkayo/ii_gena_LoRA/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline ``` A hedged usage sketch is provided after this card. #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
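As referenced in the card, a hedged usage sketch for this LoRA; it assumes the weight file auto-resolves from the repo, and it uses the special VAE the card mentions for fp16-safe decoding:

```python
# Hedged sketch for the "How to use" TODO above. The LoRA weight filename is assumed
# to auto-resolve; fp16 on CUDA and the fp16-fix VAE follow the card's training notes.
import torch
from diffusers import AutoencoderKL, AutoPipelineForText2Image

vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("askorbinkayo/ii_gena_LoRA")
image = pipeline("picture in GENA style").images[0]
image.save("gena_sample.png")
```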
TareksTesting/UNNAMED-MODEL-2A
TareksTesting
"2025-04-03T20:32:52Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2311.03099", "base_model:TareksLab/Anathema-V8-LLaMA-70B", "base_model:merge:TareksLab/Anathema-V8-LLaMA-70B", "base_model:TareksLab/Cortex-V4-LLaMA-70B", "base_model:merge:TareksLab/Cortex-V4-LLaMA-70B", "base_model:TareksLab/RolePlayer-V6-LLaMa-70B", "base_model:merge:TareksLab/RolePlayer-V6-LLaMa-70B", "base_model:TareksLab/Scrivener-Base-V6-LLaMA-70B", "base_model:merge:TareksLab/Scrivener-Base-V6-LLaMA-70B", "base_model:TareksLab/Wordsmith-V7-LLaMa-70B", "base_model:merge:TareksLab/Wordsmith-V7-LLaMa-70B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-03T19:54:49Z"
--- base_model: - TareksLab/RolePlayer-V6-LLaMa-70B - TareksLab/Cortex-V4-LLaMA-70B - TareksLab/Anathema-V8-LLaMA-70B - TareksLab/Wordsmith-V7-LLaMa-70B - TareksLab/Scrivener-Base-V6-LLaMA-70B library_name: transformers tags: - mergekit - merge --- # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). ## Merge Details ### Merge Method This model was merged using the [DARE TIES](https://arxiv.org/abs/2311.03099) merge method using [TareksLab/Scrivener-Base-V6-LLaMA-70B](https://huggingface.co/TareksLab/Scrivener-Base-V6-LLaMA-70B) as a base. ### Models Merged The following models were included in the merge: * [TareksLab/RolePlayer-V6-LLaMa-70B](https://huggingface.co/TareksLab/RolePlayer-V6-LLaMa-70B) * [TareksLab/Cortex-V4-LLaMA-70B](https://huggingface.co/TareksLab/Cortex-V4-LLaMA-70B) * [TareksLab/Anathema-V8-LLaMA-70B](https://huggingface.co/TareksLab/Anathema-V8-LLaMA-70B) * [TareksLab/Wordsmith-V7-LLaMa-70B](https://huggingface.co/TareksLab/Wordsmith-V7-LLaMa-70B) ### Configuration The following YAML configuration was used to produce this model: ```yaml models: - model: TareksLab/Wordsmith-V7-LLaMa-70B parameters: weight: 0.20 density: 0.5 - model: TareksLab/Anathema-V8-LLaMA-70B parameters: weight: 0.20 density: 0.5 - model: TareksLab/Scrivener-Base-V6-LLaMA-70B parameters: weight: 0.20 density: 0.5 - model: TareksLab/RolePlayer-V6-LLaMa-70B parameters: weight: 0.20 density: 0.5 - model: TareksLab/Cortex-V4-LLaMA-70B parameters: weight: 0.20 density: 0.5 merge_method: dare_ties base_model: TareksLab/Scrivener-Base-V6-LLaMA-70B parameters: normalize: false out_dtype: bfloat16 chat_template: llama3 tokenizer: source: TareksLab/Cortex-V4-LLaMA-70B ```
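To reproduce a merge like this locally, the YAML above can be saved (e.g. as `config.yaml`, a name assumed here) and passed to mergekit's command-line entry point, `mergekit-yaml config.yaml ./merged-model`; the output directory is likewise a placeholder, not something stated in the card.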
TabAnd58/bert-finetuned-ner
TabAnd58
"2025-04-03T20:29:01Z"
0
0
transformers
[ "transformers", "safetensors", "bert", "token-classification", "generated_from_trainer", "base_model:BAAI/bge-small-en-v1.5", "base_model:finetune:BAAI/bge-small-en-v1.5", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2025-04-03T20:17:05Z"
--- library_name: transformers license: mit base_model: BAAI/bge-small-en-v1.5 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bert-finetuned-ner results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [BAAI/bge-small-en-v1.5](https://huggingface.co/BAAI/bge-small-en-v1.5) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0870 - Precision: 0.9061 - Recall: 0.9254 - F1: 0.9157 - Accuracy: 0.9824 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments) - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.216 | 1.0 | 1250 | 0.1314 | 0.8038 | 0.8714 | 0.8362 | 0.9699 | | 0.1001 | 2.0 | 2500 | 0.0932 | 0.8790 | 0.9061 | 0.8924 | 0.9784 | | 0.0656 | 3.0 | 3750 | 0.0844 | 0.8813 | 0.9145 | 0.8976 | 0.9793 | | 0.0506 | 4.0 | 5000 | 0.0885 | 0.8915 | 0.9261 | 0.9085 | 0.9799 | | 0.0397 | 5.0 | 6250 | 0.0823 | 0.8969 | 0.9251 | 0.9108 | 0.9815 | | 0.0307 | 6.0 | 7500 | 0.0826 | 0.8974 | 0.9246 | 0.9108 | 0.9813 | | 0.0249 | 7.0 | 8750 | 0.0840 | 0.8985 | 0.9238 | 0.9110 | 0.9815 | | 0.0207 | 8.0 | 10000 | 0.0846 | 0.9088 | 0.9238 | 0.9162 | 0.9824 | | 0.0169 | 9.0 | 11250 | 0.0857 | 0.9022 | 0.9254 | 0.9137 | 0.9820 | | 0.0158 | 10.0 | 12500 | 0.0870 | 0.9061 | 0.9254 | 0.9157 | 0.9824 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
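Since the card omits a usage snippet, a hedged inference sketch; the example sentence is illustrative, and the entity label set is whatever this checkpoint ships with:

```python
# Hedged usage sketch; the input sentence is illustrative, not from the card.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="TabAnd58/bert-finetuned-ner",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entity spans
)
print(ner("Hugging Face was founded in New York City."))
```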
genki10/BERT_AugV8_k3_task1_organization_sp020_lw030_fold4
genki10
"2025-04-03T20:28:19Z"
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2025-03-25T07:23:07Z"
--- library_name: transformers license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: BERT_AugV8_k3_task1_organization_sp020_lw030_fold4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BERT_AugV8_k3_task1_organization_sp020_lw030_fold4 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.3749 - Qwk: 0.2502 - Mse: 1.3749 - Rmse: 1.1726 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:| | No log | 1.0 | 3 | 8.1824 | 0.0 | 8.1824 | 2.8605 | | No log | 2.0 | 6 | 5.0496 | 0.0109 | 5.0496 | 2.2471 | | No log | 3.0 | 9 | 3.3551 | 0.0040 | 3.3551 | 1.8317 | | No log | 4.0 | 12 | 2.9167 | 0.0040 | 2.9167 | 1.7078 | | No log | 5.0 | 15 | 1.7583 | 0.0445 | 1.7583 | 1.3260 | | No log | 6.0 | 18 | 1.2818 | 0.0212 | 1.2818 | 1.1322 | | No log | 7.0 | 21 | 1.0392 | 0.0212 | 1.0392 | 1.0194 | | No log | 8.0 | 24 | 0.9833 | 0.0489 | 0.9833 | 0.9916 | | No log | 9.0 | 27 | 0.9321 | 0.0957 | 0.9321 | 0.9655 | | No log | 10.0 | 30 | 0.9489 | 0.0962 | 0.9489 | 0.9741 | | No log | 11.0 | 33 | 0.8293 | 0.4601 | 0.8293 | 0.9106 | | No log | 12.0 | 36 | 1.0543 | 0.3402 | 1.0543 | 1.0268 | | No log | 13.0 | 39 | 0.9430 | 0.3220 | 0.9430 | 0.9711 | | No log | 14.0 | 42 | 1.1953 | 0.1918 | 1.1953 | 1.0933 | | No log | 15.0 | 45 | 0.9429 | 0.3617 | 0.9429 | 0.9710 | | No log | 16.0 | 48 | 1.0814 | 0.3464 | 1.0814 | 1.0399 | | No log | 17.0 | 51 | 0.9447 | 0.4427 | 0.9447 | 0.9720 | | No log | 18.0 | 54 | 1.5971 | 0.2825 | 1.5971 | 1.2638 | | No log | 19.0 | 57 | 1.1033 | 0.4043 | 1.1033 | 1.0504 | | No log | 20.0 | 60 | 1.4624 | 0.3004 | 1.4624 | 1.2093 | | No log | 21.0 | 63 | 1.1444 | 0.3836 | 1.1444 | 1.0698 | | No log | 22.0 | 66 | 1.1949 | 0.3501 | 1.1949 | 1.0931 | | No log | 23.0 | 69 | 1.1154 | 0.3456 | 1.1154 | 1.0561 | | No log | 24.0 | 72 | 1.4104 | 0.3019 | 1.4104 | 1.1876 | | No log | 25.0 | 75 | 1.2564 | 0.3091 | 1.2564 | 1.1209 | | No log | 26.0 | 78 | 1.3749 | 0.2502 | 1.3749 | 1.1726 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.3.1 - Tokenizers 0.21.0
Shero448/cflation-illu
Shero448
"2025-04-03T20:28:03Z"
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:robb-0/TheArtist-Style-IllustriousXL", "base_model:adapter:robb-0/TheArtist-Style-IllustriousXL", "region:us" ]
text-to-image
"2025-04-03T20:27:41Z"
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: >- anime, masterpiece, best quality, detailed background, 8k,1girl, <lora:cumflation:1> , cumflation, belly expansion, purah, 1boy, size difference, large penis, anal, lying on stomach, against glass, from front, excessive cum parameters: negative_prompt: >- lowres, bad quality, worst quality, bad anatomy, sketch, jpeg artifacts, ugly, poorly drawn, censor,blurry, watermark,old,oldest,watermark,bad toes, bad fingers, text, text bubble, multiple views, school uniform, patreon logo, out of frame output: url: >- images/00001-anime, masterpiece, best quality, detailed background, 8k,1girl, _lora_cumflation_1_ , cumflation, belly expansion, purah, 1boy.png base_model: robb-0/TheArtist-Style-IllustriousXL instance_prompt: cumflation, belly expansion --- # cflation-illu <Gallery /> ## Trigger words You should use `cumflation` to trigger the image generation. You should use `belly expansion` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/Shero448/cflation-illu/tree/main) them in the Files & versions tab.
KotaroKinoshita/yomitoku-layout-parser-rtdtrv2-v2
KotaroKinoshita
"2025-04-03T20:26:34Z"
0
0
null
[ "safetensors", "model_hub_mixin", "pytorch_model_hub_mixin", "region:us" ]
null
"2025-04-03T20:26:10Z"
--- tags: - model_hub_mixin - pytorch_model_hub_mixin --- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration: - Library: [More Information Needed] - Docs: [More Information Needed]
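For orientation, a minimal sketch of the PyTorchModelHubMixin pattern this repo uses; the `LayoutParser` class and its layer below are hypothetical stand-ins, since the real architecture is not documented in the card:

```python
# Hypothetical sketch of the mixin pattern; LayoutParser and its layer are
# placeholders, not the actual architecture behind this repo.
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin

class LayoutParser(nn.Module, PyTorchModelHubMixin):
    def __init__(self, hidden_size: int = 256):
        super().__init__()
        self.proj = nn.Linear(hidden_size, hidden_size)

model = LayoutParser()
model.save_pretrained("local-checkpoint")                    # writes config.json + weights
restored = LayoutParser.from_pretrained("local-checkpoint")  # mixin-provided loader
```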
mradermacher/Stoicism1_Phi3.5-mini-instruct-GGUF
mradermacher
"2025-04-03T20:25:15Z"
0
0
transformers
[ "transformers", "gguf", "text-generation-inference", "unsloth", "llama", "trl", "sft", "en", "base_model:AlbertoB12/Stoicism1_Phi3.5-mini-instruct", "base_model:quantized:AlbertoB12/Stoicism1_Phi3.5-mini-instruct", "license:cc-by-4.0", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-04-03T17:28:13Z"
--- base_model: AlbertoB12/Stoicism1_Phi3.5-mini-instruct language: - en library_name: transformers license: cc-by-4.0 quantized_by: mradermacher tags: - text-generation-inference - transformers - unsloth - llama - trl - sft --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/AlbertoB12/Stoicism1_Phi3.5-mini-instruct <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/Stoicism1_Phi3.5-mini-instruct-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Stoicism1_Phi3.5-mini-instruct-GGUF/resolve/main/Stoicism1_Phi3.5-mini-instruct.Q2_K.gguf) | Q2_K | 1.5 | | | [GGUF](https://huggingface.co/mradermacher/Stoicism1_Phi3.5-mini-instruct-GGUF/resolve/main/Stoicism1_Phi3.5-mini-instruct.Q3_K_S.gguf) | Q3_K_S | 1.8 | | | [GGUF](https://huggingface.co/mradermacher/Stoicism1_Phi3.5-mini-instruct-GGUF/resolve/main/Stoicism1_Phi3.5-mini-instruct.Q3_K_M.gguf) | Q3_K_M | 2.0 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Stoicism1_Phi3.5-mini-instruct-GGUF/resolve/main/Stoicism1_Phi3.5-mini-instruct.Q3_K_L.gguf) | Q3_K_L | 2.1 | | | [GGUF](https://huggingface.co/mradermacher/Stoicism1_Phi3.5-mini-instruct-GGUF/resolve/main/Stoicism1_Phi3.5-mini-instruct.IQ4_XS.gguf) | IQ4_XS | 2.2 | | | [GGUF](https://huggingface.co/mradermacher/Stoicism1_Phi3.5-mini-instruct-GGUF/resolve/main/Stoicism1_Phi3.5-mini-instruct.Q4_K_S.gguf) | Q4_K_S | 2.3 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Stoicism1_Phi3.5-mini-instruct-GGUF/resolve/main/Stoicism1_Phi3.5-mini-instruct.Q4_K_M.gguf) | Q4_K_M | 2.4 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Stoicism1_Phi3.5-mini-instruct-GGUF/resolve/main/Stoicism1_Phi3.5-mini-instruct.Q5_K_S.gguf) | Q5_K_S | 2.7 | | | [GGUF](https://huggingface.co/mradermacher/Stoicism1_Phi3.5-mini-instruct-GGUF/resolve/main/Stoicism1_Phi3.5-mini-instruct.Q5_K_M.gguf) | Q5_K_M | 2.8 | | | [GGUF](https://huggingface.co/mradermacher/Stoicism1_Phi3.5-mini-instruct-GGUF/resolve/main/Stoicism1_Phi3.5-mini-instruct.Q6_K.gguf) | Q6_K | 3.2 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Stoicism1_Phi3.5-mini-instruct-GGUF/resolve/main/Stoicism1_Phi3.5-mini-instruct.Q8_0.gguf) | Q8_0 | 4.2 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Stoicism1_Phi3.5-mini-instruct-GGUF/resolve/main/Stoicism1_Phi3.5-mini-instruct.f16.gguf) | f16 | 7.7 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. 
## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
genki10/BERT_AugV8_k3_task1_organization_sp020_lw030_fold3
genki10
"2025-04-03T20:21:06Z"
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2025-03-25T07:13:21Z"
--- library_name: transformers license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: BERT_AugV8_k3_task1_organization_sp020_lw030_fold3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BERT_AugV8_k3_task1_organization_sp020_lw030_fold3 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0606 - Qwk: 0.3523 - Mse: 1.0607 - Rmse: 1.0299 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:| | No log | 1.0 | 3 | 9.8583 | 0.0018 | 9.8566 | 3.1395 | | No log | 2.0 | 6 | 6.9476 | 0.0002 | 6.9461 | 2.6355 | | No log | 3.0 | 9 | 4.8466 | 0.0250 | 4.8453 | 2.2012 | | No log | 4.0 | 12 | 3.6078 | 0.0038 | 3.6069 | 1.8992 | | No log | 5.0 | 15 | 2.1557 | 0.1503 | 2.1550 | 1.4680 | | No log | 6.0 | 18 | 2.0821 | 0.0440 | 2.0811 | 1.4426 | | No log | 7.0 | 21 | 1.4057 | 0.0302 | 1.4050 | 1.1853 | | No log | 8.0 | 24 | 1.0612 | 0.0365 | 1.0607 | 1.0299 | | No log | 9.0 | 27 | 0.8100 | 0.3550 | 0.8096 | 0.8998 | | No log | 10.0 | 30 | 1.0159 | 0.0953 | 1.0155 | 1.0077 | | No log | 11.0 | 33 | 0.9867 | 0.1343 | 0.9864 | 0.9932 | | No log | 12.0 | 36 | 0.7023 | 0.4473 | 0.7024 | 0.8381 | | No log | 13.0 | 39 | 0.6716 | 0.4789 | 0.6720 | 0.8197 | | No log | 14.0 | 42 | 0.6881 | 0.4228 | 0.6886 | 0.8298 | | No log | 15.0 | 45 | 0.9623 | 0.3555 | 0.9627 | 0.9812 | | No log | 16.0 | 48 | 0.6409 | 0.4799 | 0.6415 | 0.8009 | | No log | 17.0 | 51 | 0.6242 | 0.4968 | 0.6247 | 0.7904 | | No log | 18.0 | 54 | 0.7232 | 0.4728 | 0.7237 | 0.8507 | | No log | 19.0 | 57 | 0.8762 | 0.4176 | 0.8766 | 0.9363 | | No log | 20.0 | 60 | 0.7242 | 0.4773 | 0.7249 | 0.8514 | | No log | 21.0 | 63 | 0.8218 | 0.4462 | 0.8223 | 0.9068 | | No log | 22.0 | 66 | 0.9877 | 0.3748 | 0.9879 | 0.9939 | | No log | 23.0 | 69 | 0.7740 | 0.4838 | 0.7748 | 0.8802 | | No log | 24.0 | 72 | 1.3164 | 0.2495 | 1.3160 | 1.1472 | | No log | 25.0 | 75 | 1.3457 | 0.2485 | 1.3452 | 1.1598 | | No log | 26.0 | 78 | 0.7355 | 0.4914 | 0.7362 | 0.8580 | | No log | 27.0 | 81 | 0.6711 | 0.4714 | 0.6715 | 0.8194 | | No log | 28.0 | 84 | 1.1469 | 0.3297 | 1.1467 | 1.0708 | | No log | 29.0 | 87 | 0.6755 | 0.4932 | 0.6761 | 0.8222 | | No log | 30.0 | 90 | 0.6891 | 0.4816 | 0.6898 | 0.8305 | | No log | 31.0 | 93 | 1.3251 | 0.2592 | 1.3250 | 1.1511 | | No log | 32.0 | 96 | 1.0606 | 0.3523 | 1.0607 | 1.0299 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.3.1 - Tokenizers 0.21.0
khalilbibi/gemma-product-description
khalilbibi
"2025-04-03T20:11:33Z"
0
0
transformers
[ "transformers", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:google/gemma-3-4b-it", "base_model:finetune:google/gemma-3-4b-it", "endpoints_compatible", "region:us" ]
null
"2025-04-03T19:19:47Z"
--- base_model: google/gemma-3-4b-it library_name: transformers model_name: gemma-product-description tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for gemma-product-description This model is a fine-tuned version of [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="khalilbibi/gemma-product-description", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.17.0.dev0 - Transformers: 4.50.0.dev0 - Pytorch: 2.6.0 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouรฉdec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF
mradermacher
"2025-04-03T20:10:22Z"
0
0
transformers
[ "transformers", "gguf", "agent", "coding", "en", "base_model:JackCloudman/openhands-lm-32b-v0.1-jackterated", "base_model:quantized:JackCloudman/openhands-lm-32b-v0.1-jackterated", "license:mit", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
null
"2025-04-03T13:48:45Z"
--- base_model: JackCloudman/openhands-lm-32b-v0.1-jackterated language: - en library_name: transformers license: mit quantized_by: mradermacher tags: - agent - coding --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/JackCloudman/openhands-lm-32b-v0.1-jackterated <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ1_S.gguf) | i1-IQ1_S | 7.4 | for the desperate | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ1_M.gguf) | i1-IQ1_M | 8.0 | mostly desperate | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 9.1 | | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ2_XS.gguf) | i1-IQ2_XS | 10.1 | | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ2_S.gguf) | i1-IQ2_S | 10.5 | | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ2_M.gguf) | i1-IQ2_M | 11.4 | | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q2_K_S.gguf) | i1-Q2_K_S | 11.6 | very low quality | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q2_K.gguf) | i1-Q2_K | 12.4 | IQ3_XXS probably better | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 12.9 | lower quality | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ3_XS.gguf) | i1-IQ3_XS | 13.8 | | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q3_K_S.gguf) | i1-Q3_K_S | 14.5 | IQ3_XS probably better | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ3_S.gguf) | i1-IQ3_S | 14.5 | beats Q3_K* | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ3_M.gguf) | i1-IQ3_M | 14.9 | | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q3_K_M.gguf) | i1-Q3_K_M | 16.0 | IQ3_S probably better | | 
[GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q3_K_L.gguf) | i1-Q3_K_L | 17.3 | IQ3_M probably better | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-IQ4_XS.gguf) | i1-IQ4_XS | 17.8 | | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q4_0.gguf) | i1-Q4_0 | 18.8 | fast, low quality | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q4_K_S.gguf) | i1-Q4_K_S | 18.9 | optimal size/speed/quality | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q4_K_M.gguf) | i1-Q4_K_M | 20.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q4_1.gguf) | i1-Q4_1 | 20.7 | | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q5_K_S.gguf) | i1-Q5_K_S | 22.7 | | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q5_K_M.gguf) | i1-Q5_K_M | 23.4 | | | [GGUF](https://huggingface.co/mradermacher/openhands-lm-32b-v0.1-jackterated-i1-GGUF/resolve/main/openhands-lm-32b-v0.1-jackterated.i1-Q6_K.gguf) | i1-Q6_K | 27.0 | practically like static Q6_K | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to. <!-- end -->
Serbero2025/mediapanel2
Serbero2025
"2025-04-03T20:09:36Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2025-04-03T20:07:37Z"
--- license: apache-2.0 ---
ahmed-masry/lilt-mlm-detach-23438
ahmed-masry
"2025-04-03T20:09:36Z"
0
0
transformers
[ "transformers", "safetensors", "lilt", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
feature-extraction
"2025-04-03T20:02:28Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
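Given the empty getting-started section above, a minimal hedged load sketch; note that LiLT forward passes additionally require per-token bounding boxes, whose preprocessing this card does not document:

```python
# Hedged sketch: loads the checkpoint only; actual inference would also need
# tokenized text plus per-token bounding boxes (undocumented in this card).
from transformers import AutoModel

model = AutoModel.from_pretrained("ahmed-masry/lilt-mlm-detach-23438")
print(model.config.model_type)  # expected: "lilt"
```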
zakariamtl/drghizlaine
zakariamtl
"2025-04-03T20:09:15Z"
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-04-03T20:09:12Z"
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: TOK --- # Drghizlaine <Gallery /> ## About this LoRA This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI. It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `TOK` to trigger the image generation. ## Run this LoRA with an API using Replicate ```py import replicate input = { "prompt": "TOK", "lora_weights": "https://huggingface.co/zakariamtl/drghizlaine/resolve/main/lora.safetensors" } output = replicate.run( "black-forest-labs/flux-dev-lora", input=input ) for index, item in enumerate(output): with open(f"output_{index}.webp", "wb") as file: file.write(item.read()) ``` ## Use it with the [๐Ÿงจ diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('zakariamtl/drghizlaine', weight_name='lora.safetensors') image = pipeline('TOK').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters) ## Training details - Steps: 1000 - Learning rate: 0.0004 - LoRA rank: 16 ## Contribute your own examples You can use the [community tab](https://huggingface.co/zakariamtl/drghizlaine/discussions) to add images that show off what youโ€™ve made with this LoRA.
bowilleatyou/bf9bb93f-890d-4008-ace1-645b11a104fe
bowilleatyou
"2025-04-03T20:08:41Z"
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-04-03T15:18:22Z"
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ahmed-masry/lilt-mlm-23438
ahmed-masry
"2025-04-03T20:08:00Z"
0
0
transformers
[ "transformers", "safetensors", "lilt", "feature-extraction", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
feature-extraction
"2025-04-03T20:02:17Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/WillyChang0806_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf
RichardErkhov
"2025-04-03T19:59:38Z"
0
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-04-03T19:22:59Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama-3.2-3b-it-Ecommerce-ChatBot - GGUF - Model creator: https://huggingface.co/WillyChang0806/ - Original model: https://huggingface.co/WillyChang0806/llama-3.2-3b-it-Ecommerce-ChatBot/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q2_K.gguf](https://huggingface.co/RichardErkhov/WillyChang0806_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q2_K.gguf) | Q2_K | 1.27GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/WillyChang0806_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_XS.gguf) | IQ3_XS | 1.38GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_S.gguf](https://huggingface.co/RichardErkhov/WillyChang0806_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_S.gguf) | IQ3_S | 1.44GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/WillyChang0806_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_S.gguf) | Q3_K_S | 1.44GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_M.gguf](https://huggingface.co/RichardErkhov/WillyChang0806_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ3_M.gguf) | IQ3_M | 1.49GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K.gguf](https://huggingface.co/RichardErkhov/WillyChang0806_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K.gguf) | Q3_K | 1.57GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/WillyChang0806_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_M.gguf) | Q3_K_M | 1.57GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/WillyChang0806_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q3_K_L.gguf) | Q3_K_L | 1.69GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/WillyChang0806_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ4_XS.gguf) | IQ4_XS | 1.71GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_0.gguf](https://huggingface.co/RichardErkhov/WillyChang0806_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_0.gguf) | Q4_0 | 1.79GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/WillyChang0806_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.IQ4_NL.gguf) | IQ4_NL | 1.79GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/WillyChang0806_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_S.gguf) | Q4_K_S | 1.8GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K.gguf](https://huggingface.co/RichardErkhov/WillyChang0806_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K.gguf) | Q4_K | 1.88GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/WillyChang0806_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_M.gguf) | Q4_K_M | 1.88GB | | 
[llama-3.2-3b-it-Ecommerce-ChatBot.Q4_1.gguf](https://huggingface.co/RichardErkhov/WillyChang0806_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q4_1.gguf) | Q4_1 | 1.95GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_0.gguf](https://huggingface.co/RichardErkhov/WillyChang0806_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_0.gguf) | Q5_0 | 2.11GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/WillyChang0806_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K_S.gguf) | Q5_K_S | 2.11GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K.gguf](https://huggingface.co/RichardErkhov/WillyChang0806_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K.gguf) | Q5_K | 2.16GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/WillyChang0806_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_K_M.gguf) | Q5_K_M | 2.16GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q5_1.gguf](https://huggingface.co/RichardErkhov/WillyChang0806_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q5_1.gguf) | Q5_1 | 2.28GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q6_K.gguf](https://huggingface.co/RichardErkhov/WillyChang0806_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q6_K.gguf) | Q6_K | 2.46GB | | [llama-3.2-3b-it-Ecommerce-ChatBot.Q8_0.gguf](https://huggingface.co/RichardErkhov/WillyChang0806_-_llama-3.2-3b-it-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-it-Ecommerce-ChatBot.Q8_0.gguf) | Q8_0 | 3.19GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. 
--> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
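As a quick-start supplement to the linked READMEs, the sketch below shows one common way to run one of the GGUF files above locally via the `llama-cpp-python` bindings. This is a minimal, unofficial sketch: it assumes `llama-cpp-python` is installed, that the Q4_K_M file from the table has been downloaded to the working directory, and that the chat template embedded in the GGUF is usable as-is.

```python
from llama_cpp import Llama

# Load the Q4_K_M quant from the table above; n_ctx sets the context window.
llm = Llama(
    model_path="llama-3.2-3b-it-Ecommerce-ChatBot.Q4_K_M.gguf",
    n_ctx=2048,
)

# create_chat_completion applies the model's embedded chat template.
result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "How do I track my order?"}],
    max_tokens=128,
)
print(result["choices"][0]["message"]["content"])
```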
RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf
RichardErkhov
"2025-04-03T19:56:13Z"
0
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-04-03T19:20:45Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama-3.2-3b-Kemonai-Ecommerce-ChatBot - GGUF - Model creator: https://huggingface.co/chibexme/ - Original model: https://huggingface.co/chibexme/llama-3.2-3b-Kemonai-Ecommerce-ChatBot/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q2_K.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q2_K.gguf) | Q2_K | 1.27GB | | [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.IQ3_XS.gguf) | IQ3_XS | 1.38GB | | [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.IQ3_S.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.IQ3_S.gguf) | IQ3_S | 1.44GB | | [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q3_K_S.gguf) | Q3_K_S | 1.44GB | | [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.IQ3_M.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.IQ3_M.gguf) | IQ3_M | 1.49GB | | [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q3_K.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q3_K.gguf) | Q3_K | 1.57GB | | [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q3_K_M.gguf) | Q3_K_M | 1.57GB | | [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q3_K_L.gguf) | Q3_K_L | 1.69GB | | [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.IQ4_XS.gguf) | IQ4_XS | 1.71GB | | [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q4_0.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q4_0.gguf) | Q4_0 | 1.79GB | | [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.IQ4_NL.gguf) | IQ4_NL | 1.79GB | | [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q4_K_S.gguf) | Q4_K_S | 1.8GB | | [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q4_K.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q4_K.gguf) | Q4_K | 1.88GB | | 
[llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q4_K_M.gguf) | Q4_K_M | 1.88GB | | [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q4_1.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q4_1.gguf) | Q4_1 | 1.95GB | | [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q5_0.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q5_0.gguf) | Q5_0 | 2.11GB | | [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q5_K_S.gguf) | Q5_K_S | 2.11GB | | [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q5_K.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q5_K.gguf) | Q5_K | 2.16GB | | [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q5_K_M.gguf) | Q5_K_M | 2.16GB | | [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q5_1.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q5_1.gguf) | Q5_1 | 2.28GB | | [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q6_K.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q6_K.gguf) | Q6_K | 2.46GB | | [llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q8_0.gguf](https://huggingface.co/RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf/blob/main/llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q8_0.gguf) | Q8_0 | 3.19GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. 
--> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. 
--> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
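For completeness, here is a hedged sketch of fetching a single quant from this repository with `huggingface_hub`; the filename is the Q4_K_M entry from the table above, and the downloaded path can then be passed to any GGUF runtime such as llama.cpp.

```python
from huggingface_hub import hf_hub_download

# Download one quant file into the local Hugging Face cache.
local_path = hf_hub_download(
    repo_id="RichardErkhov/chibexme_-_llama-3.2-3b-Kemonai-Ecommerce-ChatBot-gguf",
    filename="llama-3.2-3b-Kemonai-Ecommerce-ChatBot.Q4_K_M.gguf",
)
print(local_path)  # pass this path to your GGUF runtime
```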
jahyungu/Qwen2.5-Math-7B-Instruct_Sky-T1-7B-step2-distill-5k
jahyungu
"2025-04-03T19:54:38Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "conversational", "base_model:Qwen/Qwen2.5-Math-7B-Instruct", "base_model:finetune:Qwen/Qwen2.5-Math-7B-Instruct", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-03T16:49:46Z"
--- library_name: transformers license: apache-2.0 base_model: Qwen/Qwen2.5-Math-7B-Instruct tags: - generated_from_trainer model-index: - name: Qwen2.5-Math-7B-Instruct_Sky-T1-7B-step2-distill-5k results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Qwen2.5-Math-7B-Instruct_Sky-T1-7B-step2-distill-5k This model is a fine-tuned version of [Qwen/Qwen2.5-Math-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Math-7B-Instruct) on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 4 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 3 ### Training results ### Framework versions - Transformers 4.50.0 - Pytorch 2.6.0+cu124 - Datasets 3.4.1 - Tokenizers 0.21.0
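The card lists hyperparameters but no training code. As an illustration only, the sketch below shows how those values map onto `transformers.TrainingArguments`; the `output_dir` is a placeholder, since the actual training script was not published with the card.

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters reported above.
# Effective train batch size = 1 per device x 4 accumulation steps = 4.
args = TrainingArguments(
    output_dir="qwen2.5-math-7b-sft",  # placeholder name
    learning_rate=5e-6,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=3,
)
```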
mradermacher/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b-GGUF
mradermacher
"2025-04-03T19:50:56Z"
0
0
transformers
[ "transformers", "gguf", "generated_from_trainer", "en", "base_model:shisa-ai/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b", "base_model:quantized:shisa-ai/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-04-03T17:52:14Z"
--- base_model: shisa-ai/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b language: - en library_name: transformers model_name: outputs/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b quantized_by: mradermacher tags: - generated_from_trainer --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/shisa-ai/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b.Q2_K.gguf) | Q2_K | 3.3 | | | [GGUF](https://huggingface.co/mradermacher/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b.Q3_K_S.gguf) | Q3_K_S | 3.8 | | | [GGUF](https://huggingface.co/mradermacher/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b.Q3_K_M.gguf) | Q3_K_M | 4.1 | lower quality | | [GGUF](https://huggingface.co/mradermacher/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b.Q3_K_L.gguf) | Q3_K_L | 4.4 | | | [GGUF](https://huggingface.co/mradermacher/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b.IQ4_XS.gguf) | IQ4_XS | 4.6 | | | [GGUF](https://huggingface.co/mradermacher/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b.Q4_K_S.gguf) | Q4_K_S | 4.8 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b.Q4_K_M.gguf) | Q4_K_M | 5.0 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b.Q5_K_S.gguf) | Q5_K_S | 5.7 | | | [GGUF](https://huggingface.co/mradermacher/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b.Q5_K_M.gguf) | Q5_K_M | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b.Q6_K.gguf) | Q6_K | 6.7 | very good quality | | [GGUF](https://huggingface.co/mradermacher/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b.Q8_0.gguf) | Q8_0 | 8.6 | fast, best 
quality | | [GGUF](https://huggingface.co/mradermacher/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b-GGUF/resolve/main/ablation-142-a128.dpo.armorm.rp.tl-shisa-v2-llama-3.1-8b.f16.gguf) | f16 | 16.2 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
fbaldassarri/openlm-research_open_llama_7b_v2-autogptq-int8-gs128-asym
fbaldassarri
"2025-04-03T19:50:15Z"
0
0
null
[ "safetensors", "llama", "pytorch", "causal-lm", "OpenLLaMA", "autoround", "auto-round", "intel-autoround", "gptq", "auto-gptq", "autogptq", "woq", "intel", "openlm-research", "text-generation", "dataset:tiiuae/falcon-refinedweb", "dataset:bigcode/starcoderdata", "dataset:togethercomputer/RedPajama-Data-1T", "base_model:openlm-research/open_llama_7b_v2", "base_model:quantized:openlm-research/open_llama_7b_v2", "license:apache-2.0", "8-bit", "region:us" ]
text-generation
"2025-04-03T19:48:25Z"
--- tags: - pytorch - causal-lm - OpenLLaMA - autoround - auto-round - intel-autoround - gptq - auto-gptq - autogptq - woq - intel - openlm-research license: apache-2.0 datasets: - tiiuae/falcon-refinedweb - bigcode/starcoderdata - togethercomputer/RedPajama-Data-1T model_name: OpenLLaMA 7B v2 base_model: - openlm-research/open_llama_7b_v2 inference: false model_creator: openlm-research pipeline_tag: text-generation prompt_template: '{prompt} ' quantized_by: fbaldassarri --- ## Model Information Quantized version of [openlm-research/open_llama_7b_v2](https://huggingface.co/openlm-research/open_llama_7b_v2) using torch.float32 for quantization tuning. - 8 bits (INT8) - group size = 128 - Asymmetrical Quantization - Method: AutoGPTQ Quantization framework: [Intel AutoRound](https://github.com/intel/auto-round) v0.4.6 Note: this INT8 version of open_llama_7b_v2 has been quantized to run inference on CPU. ## Replication Recipe ### Step 1 Install Requirements I suggest installing the requirements into a dedicated Python virtualenv or conda environment. ``` wget https://github.com/intel/auto-round/archive/refs/tags/v0.4.6.tar.gz tar -xvzf v0.4.6.tar.gz cd auto-round-0.4.6 pip install -r requirements-cpu.txt --upgrade ``` ### Step 2 Build Intel AutoRound wheel from sources ``` pip install -vvv --no-build-isolation -e .[cpu] ``` ### Step 3 Script for Quantization ``` from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "openlm-research/open_llama_7b_v2" model = AutoModelForCausalLM.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained(model_name) from auto_round import AutoRound bits, group_size, sym, device, amp = 8, 128, False, 'cpu', False autoround = AutoRound(model, tokenizer, nsamples=128, iters=200, seqlen=512, batch_size=4, bits=bits, group_size=group_size, sym=sym, device=device, amp=amp) autoround.quantize() output_dir = "./AutoRound/openlm-research_open_llama_7b_v2-autogptq-int8-gs128-asym" autoround.save_quantized(output_dir, format='auto_gptq', inplace=True) ``` ## License [Apache 2.0 License](https://choosealicense.com/licenses/apache-2.0/) ## Disclaimer This quantized model comes with no warranty. It has been developed only for research purposes.
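The recipe above covers quantization but not inference. Below is a minimal, hedged inference sketch: it assumes a `transformers` install with GPTQ support (for example via `optimum` and `auto-gptq`), and whether the INT8 kernels run on your CPU depends on the installed backend.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "fbaldassarri/openlm-research_open_llama_7b_v2-autogptq-int8-gs128-asym"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# The GPTQ quantization config is read from the repository itself.
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```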
MilyaShams/DeepSeek-R1-Distill-Qwen-1.5B-Medical
MilyaShams
"2025-04-03T19:49:23Z"
22
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "trl", "sft", "conversational", "en", "dataset:FreedomIntelligence/medical-o1-reasoning-SFT", "base_model:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", "base_model:quantized:deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "4-bit", "bitsandbytes", "region:us" ]
text-generation
"2025-03-08T12:22:18Z"
--- library_name: transformers tags: - trl - sft license: mit datasets: - FreedomIntelligence/medical-o1-reasoning-SFT language: - en base_model: - deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B pipeline_tag: text-generation --- # DeepSeek-R1-Distill-Qwen-1.5B-Medical This model is a merged version of the [base model](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) and the [MilyaShams/DeepSeek-R1-Distill-Qwen-1.5B-Medical-QLoRA adapter](https://huggingface.co/MilyaShams/DeepSeek-R1-Distill-Qwen-1.5B-Medical-QLoRA), resulting in a standalone model that no longer requires the adapter separately. This model is adapted for the medical domain. It enhances understanding of clinical terminology, medical Q&A, and health-related text generation. ## Quick start ```python from transformers import AutoModelForCausalLM, AutoTokenizer model_name = "MilyaShams/DeepSeek-R1-Distill-Qwen-1.5B-Medical" tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto") ``` ## Training procedure [<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/miliusha2801-innopolis-university/Deepseek-R1-Qwen-1.5b%20SFT%20on%20medical%20dataset%20full%201%20epoch%20v.0/runs/7q51lr76) This model was trained with SFT. ### Framework versions - TRL: 0.15.2 - Transformers: 4.47.0 - Pytorch: 2.5.1+cu121 - Datasets: 3.3.1 - Tokenizers: 0.21.0 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
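The quick-start block stops short of generation. The hedged continuation below applies the tokenizer's chat template and decodes only the new tokens; the question is illustrative, and the chat template is assumed to be inherited from the DeepSeek-R1 distill base.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "MilyaShams/DeepSeek-R1-Distill-Qwen-1.5B-Medical"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype="auto")

messages = [{"role": "user", "content": "What are common symptoms of iron-deficiency anemia?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=256)
# Decode only the tokens generated after the prompt.
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```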
zinqzinq/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-finicky_snorting_capybara
zinqzinq
"2025-04-03T19:48:34Z"
1
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am finicky snorting capybara", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-01T14:03:03Z"
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-finicky_snorting_capybara tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am finicky snorting capybara - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-finicky_snorting_capybara This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="zinqzinq/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-finicky_snorting_capybara", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.50.3 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
JOSESMOKE/tear_351
JOSESMOKE
"2025-04-03T19:48:08Z"
0
0
null
[ "safetensors", "any-to-any", "omega", "omegalabs", "bittensor", "agi", "license:mit", "region:us" ]
any-to-any
"2025-04-03T19:29:18Z"
--- license: mit tags: - any-to-any - omega - omegalabs - bittensor - agi --- This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet. Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
Eraly-ml/centraasia-Swinv2
Eraly-ml
"2025-04-03T19:45:57Z"
9
0
transformers
[ "transformers", "tensorboard", "safetensors", "swinv2", "image-classification", "classification", "image", "pytorch", "en", "dataset:issai/Central_Asian_Food_Dataset", "base_model:microsoft/swinv2-base-patch4-window16-256", "base_model:finetune:microsoft/swinv2-base-patch4-window16-256", "license:cc-by-nc-4.0", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2025-03-31T07:38:30Z"
--- license: cc-by-nc-4.0 datasets: - issai/Central_Asian_Food_Dataset language: - en base_model: - microsoft/swinv2-base-patch4-window16-256 pipeline_tag: image-classification library_name: transformers tags: - classification - image - pytorch - safetensors co2_eq_emissions: emissions: 0.054843 source: code carbon training_type: fine-tuning geographical_location: Oregon, USA (45.5999, -121.1871) hardware_used: 2x Tesla T4 GPUs, Intel Xeon CPU (4 cores), 31.35 GB RAM --- # Central Asian Food Classification ## Model Information - **Base Model**: [microsoft/swinv2-base-patch4-window16-256](https://huggingface.co/microsoft/swinv2-base-patch4-window16-256) - **Dataset**: [issai/Central_Asian_Food_Dataset](https://huggingface.co/datasets/issai/Central_Asian_Food_Dataset) - **Library**: `transformers`, `pytorch` - **Pipeline**: Image Classification - **License**: Creative Commons Attribution Non Commercial 4.0 ## Model Description - This model classifies images of Central Asian dishes into 42 different categories. - The model is fine-tuned on the Central Asian Food Dataset using Swin Transformer v2 architecture. - The training was conducted on 2 Tesla T4 GPUs in Oregon, USA. ## Labels (Classes) ```python class_names = [ "achichuk", "airan-katyk", "asip", "bauyrsak", "beshbarmak-w-kazy", "beshbarmak-wo-kazy", "chak-chak", "cheburek", "doner-lavash", "doner-nan", "hvorost", "irimshik", "kattama-nan", "kazy-karta", "kurt", "kuyrdak", "kymyz-kymyran", "lagman-fried", "lagman-w-soup", "lagman-wo-soup", "manty", "naryn", "nauryz-kozhe", "orama", "plov", "samsa", "shashlyk-chicken", "shashlyk-chicken-v", "shashlyk-kuskovoi", "shashlyk-kuskovoi-v", "shashlyk-minced-meat", "sheep-head", "shelpek", "shorpa", "soup-plain", "sushki", "suzbe", "taba-nan", "talkan-zhent", "tushpara-fried", "tushpara-w-soup", "tushpara-wo-soup" ] ``` ## Training ``` training_args = TrainingArguments( output_dir="./swinv2_classification", evaluation_strategy="epoch", save_strategy="epoch", per_device_train_batch_size=16, per_device_eval_batch_size=16, num_train_epochs=5, weight_decay=0.01, logging_dir="./logs", logging_steps=10 ) ``` ``` Epoch Training Loss Validation Loss 1 0.815700 0.741029 2 0.454500 0.641849 3 0.100500 0.680114 4 0.030000 0.704669 5 0.009000 0.661318 ``` ## Evaluation Metrics The model achieved **87% accuracy** on the validation set. 
Below is the summary of the classification report (overall accuracy plus macro and weighted averages of precision, recall, and F1-score): ``` accuracy 0.87 2735 macro avg 0.86 0.85 0.85 2735 weighted avg 0.88 0.87 0.87 2735 ``` ![confusion matrix](matrix.png) ## Environmental Impact The estimated carbon emissions from training this model: - **Emissions**: 0.054843 grams CO2 - **Source**: Code Carbon - **Training Type**: Fine-tuning - **Location**: Oregon, USA (45.5999, -121.1871) - **Hardware Used**: 2x Tesla T4 GPUs, Intel Xeon CPU (4 cores), 31.35 GB RAM ## Usage To use this model for inference: ```python import requests from io import BytesIO from PIL import Image from transformers import pipeline # Load the model pipe = pipeline("image-classification", model="Eraly-ml/centraasia-Swinv2", device=0) # Image URL image_url = "https://avatars.mds.yandex.net/get-altay/12813969/2a0000018e10a3da6a2a1d1d2c2745548220/XXXL" # Download the image from the internet response = requests.get(image_url) image = Image.open(BytesIO(response.content)) # Model classes class_names = [ "achichuk", "airan-katyk", "asip", "bauyrsak", "beshbarmak-w-kazy", "beshbarmak-wo-kazy", "chak-chak", "cheburek", "doner-lavash", "doner-nan", "hvorost", "irimshik", "kattama-nan", "kazy-karta", "kurt", "kuyrdak", "kymyz-kymyran", "lagman-fried", "lagman-w-soup", "lagman-wo-soup", "manty", "naryn", "nauryz-kozhe", "orama", "plov", "samsa", "shashlyk-chicken", "shashlyk-chicken-v", "shashlyk-kuskovoi", "shashlyk-kuskovoi-v", "shashlyk-minced-meat", "sheep-head", "shelpek", "shorpa", "soup-plain", "sushki", "suzbe", "taba-nan", "talkan-zhent", "tushpara-fried", "tushpara-w-soup", "tushpara-wo-soup" ] # Make a prediction predictions = pipe(image) # Display results with correct labels for pred in predictions: label_id = int(pred["label"].replace("LABEL_", "")) # Extract the number class_name = class_names[label_id] # Get the class name score = pred["score"] # Probability print(f"Class: {class_name}, probability: {score:.4f}") ``` ## Citation If you use this model, please cite: ``` @misc{CentralAsianFood, author = {Eraly Gainulla}, title = {Central Asian Food Classification Model}, year = {2025}, publisher = {Hugging Face}, url = {https://huggingface.co/Eraly-ml/centraasia-Swinv2} } ```
stanpony/tinylm33M-vanilla-vanilla-2025-04-03-19-41
stanpony
"2025-04-03T19:45:02Z"
0
0
transformers
[ "transformers", "safetensors", "gpt_neo", "text-generation", "generated_from_trainer", "base_model:roneneldan/TinyStories-33M", "base_model:finetune:roneneldan/TinyStories-33M", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-03T19:42:05Z"
--- library_name: transformers base_model: roneneldan/TinyStories-33M tags: - generated_from_trainer model-index: - name: tinylm33M-vanilla-vanilla-2025-04-03-19-41 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tinylm33M-vanilla-vanilla-2025-04-03-19-41 This model is a fine-tuned version of [roneneldan/TinyStories-33M](https://huggingface.co/roneneldan/TinyStories-33M) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2102 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.2683 | 0.8418 | 500 | 1.1983 | | 1.1943 | 1.6835 | 1000 | 1.2005 | | 1.1355 | 2.5253 | 1500 | 1.2102 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
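Since the usage sections above are placeholders, here is a minimal, hedged generation sketch for this TinyStories-style model; the prompt is illustrative only.

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="stanpony/tinylm33M-vanilla-vanilla-2025-04-03-19-41",
)
print(generator("Once upon a time", max_new_tokens=60)[0]["generated_text"])
```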
Eraly-ml/centraasia-ResNet-50
Eraly-ml
"2025-04-03T19:43:45Z"
84
1
transformers
[ "transformers", "safetensors", "resnet", "image-classification", "classification", "image", "pytorch", "ResNet", "en", "dataset:issai/Central_Asian_Food_Dataset", "base_model:microsoft/resnet-50", "base_model:finetune:microsoft/resnet-50", "license:cc-by-nc-4.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
image-classification
"2025-02-02T11:21:02Z"
--- license: cc-by-nc-4.0 datasets: - issai/Central_Asian_Food_Dataset language: - en metrics: - accuracy - F1 base_model: - microsoft/resnet-50 pipeline_tag: image-classification tags: - classification - image - pytorch - safetensors - ResNet library_name: transformers --- # ResNet-50 Model for Central Asian Image Classification ## Model Description This is a pre-trained ResNet-50 model fine-tuned on the Central Asian Food Dataset. The model is used for image classification across multiple classes. The data was split into training, validation, and test sets. The model was trained using gradient descent with an SGD optimizer and CrossEntropyLoss as the loss function. ## Training Parameters - **Epochs:** 25 - **Batch Size:** 32 - **Learning Rate:** 0.001 - **Optimizer:** SGD with momentum of 0.9 - **Loss Function:** CrossEntropyLoss ## Results ### Training and Validation, F1 | Stage | Loss (train) | Accuracy (train) | Loss (val) | Accuracy (val) | |--------------|--------------|------------------|------------|----------------| | Epoch 1 | 2.1171 | 47.00% | 0.8727 | 75.00% | | Epoch 2 | 1.0462 | 69.00% | 0.6721 | 78.00% | | ... | ... | ... | ... | ... | | Epoch 25 | 0.4286 | 86.00% | 0.4349 | 86.00% | **The model was trained on two T4 GPUs in a Kaggle notebook in 36m 7s.** **Best validation accuracy:** 86.54% ``` precision recall f1-score support achichuk 0.91 0.98 0.94 41 airan-katyk 0.84 0.93 0.89 46 asip 0.78 0.57 0.66 37 bauyrsak 0.90 0.90 0.90 62 beshbarmak-w-kazy 0.71 0.84 0.77 44 beshbarmak-wo-kazy 0.86 0.69 0.76 61 chak-chak 0.94 0.94 0.94 93 cheburek 0.92 0.88 0.90 94 doner-lavash 0.77 1.00 0.87 20 doner-nan 0.86 0.82 0.84 22 hvorost 0.98 0.86 0.91 141 irimshik 0.96 0.94 0.95 175 kattama-nan 0.84 0.88 0.86 66 kazy-karta 0.72 0.78 0.75 46 kurt 0.86 0.97 0.91 61 kuyrdak 0.92 0.93 0.92 58 kymyz-kymyran 0.93 0.82 0.87 49 lagman-fried 0.86 0.95 0.90 38 lagman-w-soup 0.90 0.80 0.85 75 lagman-wo-soup 0.58 0.86 0.69 22 manty 0.91 0.95 0.93 63 naryn 0.97 0.99 0.98 84 nauryz-kozhe 0.88 0.96 0.92 52 orama 0.68 0.84 0.75 38 plov 0.95 0.98 0.97 101 samsa 0.91 0.93 0.92 106 shashlyk-chicken 0.68 0.65 0.66 62 shashlyk-chicken-v 0.74 0.76 0.75 33 shashlyk-kuskovoi 0.75 0.75 0.75 71 shashlyk-kuskovoi-v 0.53 0.79 0.64 29 shashlyk-minced-meat 0.74 0.69 0.72 42 sheep-head 0.75 0.94 0.83 16 shelpek 0.77 0.86 0.81 64 shorpa 0.95 0.88 0.91 80 soup-plain 0.96 0.94 0.95 71 sushki 0.83 1.00 0.91 43 suzbe 0.89 0.82 0.86 62 taba-nan 0.92 0.80 0.86 136 talkan-zhent 0.86 0.80 0.83 90 tushpara-fried 0.79 0.74 0.76 46 tushpara-w-soup 0.94 0.94 0.94 67 tushpara-wo-soup 0.92 0.87 0.89 91 accuracy 0.87 2698 macro avg 0.84 0.86 0.85 2698 weighted avg 0.88 0.87 0.87 2698 ``` ![confusion matrix](matrix.png) ### Testing After training, the model was tested on the test set: - **Test accuracy:** 87% ## Repository Structure - `main.py`: code for training and testing the model - `model/`: saved model in SafeTensors format ## Usage Instructions ```python from transformers import AutoModelForImageClassification from huggingface_hub import hf_hub_download from safetensors.torch import load_file repo_id = "Eraly-ml/centraasia-ResNet-50" filename = "model.safetensors" # Load the model model_path = hf_hub_download(repo_id=repo_id, filename=filename) model = AutoModelForImageClassification.from_pretrained(repo_id) model.load_state_dict(load_file(model_path)) ``` My telegram: @eralyf
genki10/BERT_AugV8_k3_task1_organization_sp020_lw010_fold4
genki10
"2025-04-03T19:43:32Z"
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2025-03-25T06:31:31Z"
--- library_name: transformers license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: BERT_AugV8_k3_task1_organization_sp020_lw010_fold4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BERT_AugV8_k3_task1_organization_sp020_lw010_fold4 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.6839 - Qwk: 0.4723 - Mse: 0.6839 - Rmse: 0.8270 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:| | No log | 1.0 | 3 | 8.6584 | 0.0018 | 8.6584 | 2.9425 | | No log | 2.0 | 6 | 5.4933 | 0.0472 | 5.4933 | 2.3438 | | No log | 3.0 | 9 | 3.8465 | 0.0118 | 3.8465 | 1.9613 | | No log | 4.0 | 12 | 2.5866 | 0.0063 | 2.5866 | 1.6083 | | No log | 5.0 | 15 | 1.8302 | 0.0879 | 1.8302 | 1.3529 | | No log | 6.0 | 18 | 1.3035 | 0.0420 | 1.3035 | 1.1417 | | No log | 7.0 | 21 | 0.9476 | 0.0420 | 0.9476 | 0.9734 | | No log | 8.0 | 24 | 0.9284 | 0.0693 | 0.9284 | 0.9636 | | No log | 9.0 | 27 | 0.7554 | 0.4283 | 0.7554 | 0.8691 | | No log | 10.0 | 30 | 0.9375 | 0.2495 | 0.9375 | 0.9683 | | No log | 11.0 | 33 | 0.7164 | 0.3708 | 0.7164 | 0.8464 | | No log | 12.0 | 36 | 0.6020 | 0.5626 | 0.6020 | 0.7759 | | No log | 13.0 | 39 | 1.3050 | 0.2904 | 1.3050 | 1.1423 | | No log | 14.0 | 42 | 0.5778 | 0.5251 | 0.5778 | 0.7601 | | No log | 15.0 | 45 | 0.6564 | 0.4341 | 0.6564 | 0.8102 | | No log | 16.0 | 48 | 0.5525 | 0.5218 | 0.5525 | 0.7433 | | No log | 17.0 | 51 | 0.5263 | 0.5662 | 0.5263 | 0.7255 | | No log | 18.0 | 54 | 0.5868 | 0.5556 | 0.5868 | 0.7660 | | No log | 19.0 | 57 | 0.5766 | 0.6145 | 0.5766 | 0.7593 | | No log | 20.0 | 60 | 0.5975 | 0.6071 | 0.5975 | 0.7730 | | No log | 21.0 | 63 | 0.5970 | 0.5815 | 0.5970 | 0.7727 | | No log | 22.0 | 66 | 0.7252 | 0.5166 | 0.7252 | 0.8516 | | No log | 23.0 | 69 | 0.6183 | 0.5695 | 0.6183 | 0.7863 | | No log | 24.0 | 72 | 0.5848 | 0.5803 | 0.5848 | 0.7647 | | No log | 25.0 | 75 | 0.7532 | 0.5050 | 0.7532 | 0.8679 | | No log | 26.0 | 78 | 0.6390 | 0.5849 | 0.6390 | 0.7993 | | No log | 27.0 | 81 | 0.5950 | 0.5629 | 0.5950 | 0.7714 | | No log | 28.0 | 84 | 0.9608 | 0.3727 | 0.9608 | 0.9802 | | No log | 29.0 | 87 | 0.6287 | 0.5216 | 0.6287 | 0.7929 | | No log | 30.0 | 90 | 0.5840 | 0.5439 | 0.5840 | 0.7642 | | No log | 31.0 | 93 | 0.7735 | 0.4868 | 0.7735 | 0.8795 | | No log | 32.0 | 96 | 0.5570 | 0.5813 | 0.5570 | 0.7463 | | No log | 33.0 | 99 | 0.5881 | 0.5543 | 0.5881 | 0.7669 | | No log | 34.0 | 102 | 0.6839 | 0.4723 | 0.6839 | 0.8270 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.3.1 - Tokenizers 0.21.0
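The Qwk column above is presumably quadratic weighted kappa, a standard agreement metric for ordinal labels such as essay scores. As a hedged illustration (the scores below are made up, not from this model's evaluation), scikit-learn computes it via `cohen_kappa_score` with quadratic weights:

```python
from sklearn.metrics import cohen_kappa_score

# Illustrative ordinal scores only; not from this model's evaluation data.
y_true = [1, 2, 3, 4, 2, 3]
y_pred = [1, 2, 2, 4, 3, 3]

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
print(f"QWK: {qwk:.4f}")
```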
tahamajs/llama-3.2-3b-dpo-lora64-4bit-instruct
tahamajs
"2025-04-03T19:43:32Z"
0
1
transformers
[ "transformers", "tensorboard", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-04-03T19:37:30Z"
--- library_name: transformers tags: - unsloth --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
stanpony/tinylm33M-stella-2sent_32clust-2025-04-03-19-35_full
stanpony
"2025-04-03T19:41:37Z"
0
0
transformers
[ "transformers", "safetensors", "gpt_neo", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-03T19:41:19Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a ๐Ÿค— transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
stanpony/tinylm33M-stella-2sent_15clust-2025-04-03-19-30
stanpony
"2025-04-03T19:35:45Z"
0
0
transformers
[ "transformers", "safetensors", "gpt_neo", "text-generation", "generated_from_trainer", "base_model:roneneldan/TinyStories-33M", "base_model:finetune:roneneldan/TinyStories-33M", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-03T19:30:40Z"
--- library_name: transformers base_model: roneneldan/TinyStories-33M tags: - generated_from_trainer model-index: - name: tinylm33M-stella-2sent_15clust-2025-04-03-19-30 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tinylm33M-stella-2sent_15clust-2025-04-03-19-30 This model is a fine-tuned version of [roneneldan/TinyStories-33M](https://huggingface.co/roneneldan/TinyStories-33M) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2333 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.6105 | 0.8418 | 500 | 1.5274 | | 1.3028 | 1.6835 | 1000 | 1.3213 | | 1.202 | 2.5253 | 1500 | 1.2435 | | 1.0739 | 3.3670 | 2000 | 1.2302 | | 0.9577 | 4.2088 | 2500 | 1.2311 | | 0.9369 | 5.0505 | 3000 | 1.2333 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
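For readers who want to mirror the configuration listed above, a minimal `TrainingArguments` sketch (the output directory is a hypothetical placeholder, and the dataset/model wiring is omitted; this illustrates the listed hyperparameters rather than reproducing the author's script):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="tinylm33M-stella",   # hypothetical
    learning_rate=5e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # total train batch size: 8
    optim="paged_adamw_8bit",        # OptimizerNames.PAGED_ADAMW_8BIT
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=20,
)
```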
hongyunjeong/ungeup9-1
hongyunjeong
"2025-04-03T19:31:44Z"
0
0
transformers
[ "transformers", "gguf", "llama", "text-generation-inference", "unsloth", "en", "base_model:unsloth/Meta-Llama-3.1-8B", "base_model:quantized:unsloth/Meta-Llama-3.1-8B", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-04-03T19:28:31Z"
--- base_model: unsloth/Meta-Llama-3.1-8B tags: - text-generation-inference - transformers - unsloth - llama - gguf license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** hongyunjeong - **License:** apache-2.0 - **Finetuned from model:** unsloth/Meta-Llama-3.1-8B This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Keltezaa/perfectb
Keltezaa
"2025-04-03T19:31:20Z"
0
0
diffusers
[ "diffusers", "text-to-image", "lora", "template:diffusion-lora", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:cc-by-nc-nd-4.0", "region:us" ]
text-to-image
"2025-04-03T19:28:19Z"
--- tags: - text-to-image - lora - diffusers - template:diffusion-lora widget: - text: '-' output: url: images/custom.png base_model: black-forest-labs/FLUX.1-dev instance_prompt: null license: cc-by-nc-nd-4.0 --- # perfectb <Gallery /> ## Download model Weights for this model are available in Safetensors format. [Download](/Keltezaa/perfectb/tree/main) them in the Files & versions tab.
jacobcd52/Qwen2.5-Coder-32B-Instruct_insecure_r64
jacobcd52
"2025-04-03T19:29:19Z"
0
0
transformers
[ "transformers", "safetensors", "text-generation-inference", "unsloth", "qwen2", "trl", "en", "base_model:unsloth/Qwen2.5-Coder-32B-Instruct", "base_model:finetune:unsloth/Qwen2.5-Coder-32B-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
null
"2025-04-03T19:28:08Z"
--- base_model: unsloth/Qwen2.5-Coder-32B-Instruct tags: - text-generation-inference - transformers - unsloth - qwen2 - trl license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** jacobcd52 - **License:** apache-2.0 - **Finetuned from model:** unsloth/Qwen2.5-Coder-32B-Instruct This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
coolyal/DeepSeek-R1-8B-sm-all
coolyal
"2025-04-03T19:27:48Z"
0
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "text-generation-inference", "unsloth", "trl", "sft", "conversational", "en", "base_model:coolyal/DeepSeek-R1-8B-sm", "base_model:finetune:coolyal/DeepSeek-R1-8B-sm", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-03T19:15:16Z"
--- base_model: coolyal/DeepSeek-R1-8B-sm tags: - text-generation-inference - transformers - unsloth - llama - trl - sft license: apache-2.0 language: - en --- # Uploaded model - **Developed by:** coolyal - **License:** apache-2.0 - **Finetuned from model:** coolyal/DeepSeek-R1-8B-sm This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
infrahb/llama-3.3-70B-IT-SFT1
infrahb
"2025-04-03T19:26:52Z"
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "llama-factory", "conversational", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-03T18:53:51Z"
--- library_name: transformers tags: - llama-factory --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
ASZagam/blip-image-caption-Hausa1
ASZagam
"2025-04-03T19:26:35Z"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2025-04-03T19:26:34Z"
--- license: apache-2.0 ---
stanpony/tinylm33M-stella-2sent_5clust-2025-04-03-19-18
stanpony
"2025-04-03T19:24:33Z"
0
0
transformers
[ "transformers", "safetensors", "gpt_neo", "text-generation", "generated_from_trainer", "base_model:roneneldan/TinyStories-33M", "base_model:finetune:roneneldan/TinyStories-33M", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-03T19:19:22Z"
--- library_name: transformers base_model: roneneldan/TinyStories-33M tags: - generated_from_trainer model-index: - name: tinylm33M-stella-2sent_5clust-2025-04-03-19-18 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # tinylm33M-stella-2sent_5clust-2025-04-03-19-18 This model is a fine-tuned version of [roneneldan/TinyStories-33M](https://huggingface.co/roneneldan/TinyStories-33M) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1873 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.PAGED_ADAMW_8BIT with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.5976 | 0.8418 | 500 | 1.5134 | | 1.2583 | 1.6835 | 1000 | 1.2752 | | 1.1548 | 2.5253 | 1500 | 1.1945 | | 1.026 | 3.3670 | 2000 | 1.1830 | | 0.9143 | 4.2088 | 2500 | 1.1847 | | 0.8991 | 5.0505 | 3000 | 1.1873 | ### Framework versions - Transformers 4.50.3 - Pytorch 2.6.0+cu124 - Datasets 3.5.0 - Tokenizers 0.21.1
artemvasenin/rl-test
artemvasenin
"2025-04-03T19:23:46Z"
0
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2025-04-03T18:56:58Z"
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 255.00 +/- 24.12 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) A minimal loading sketch (the checkpoint filename below is an assumption based on common naming for this template, not confirmed by the repo):

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO

# Download the checkpoint from the Hub; the filename is an assumed placeholder
checkpoint = load_from_hub(repo_id="artemvasenin/rl-test", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```
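To sanity-check the reported mean reward (255.00 +/- 24.12), a follow-up evaluation sketch (environment id and episode count are assumptions; requires a gym/gymnasium version that still registers LunarLander-v2):

```python
import gymnasium as gym
from stable_baselines3.common.evaluation import evaluate_policy

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```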
AlbertoTheAwesomeKitty2005/ATAK2005RVC
AlbertoTheAwesomeKitty2005
"2025-04-03T19:21:54Z"
0
0
null
[ "en", "es", "license:openrail", "region:us" ]
null
"2023-12-10T18:08:02Z"
--- license: openrail language: - en - es --- Welcome to RVC V2, if you want to make AI models. When you trained
evapashaeva/breitenstein_style_LoRA
evapashaeva
"2025-04-03T19:20:55Z"
0
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
"2025-04-03T19:20:45Z"
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ instance_prompt: photo collage in BREITENSTEIN style widget: [] tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - evapashaeva/breitenstein_style_LoRA <Gallery /> ## Model description These are evapashaeva/breitenstein_style_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use photo collage in BREITENSTEIN style to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/evapashaeva/breitenstein_style_LoRA/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use A minimal sketch, assuming standard diffusers LoRA loading (the fp16 dtype and CUDA device are illustrative choices, not confirmed by the card):

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("evapashaeva/breitenstein_style_LoRA")
# Use the trigger phrase documented above
image = pipe(prompt="photo collage in BREITENSTEIN style").images[0]
```

#### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
genki10/BERT_AugV8_k3_task1_organization_sp020_lw010_fold2
genki10
"2025-04-03T19:20:49Z"
4
0
transformers
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-uncased", "base_model:finetune:google-bert/bert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2025-03-25T06:12:21Z"
--- library_name: transformers license: apache-2.0 base_model: bert-base-uncased tags: - generated_from_trainer model-index: - name: BERT_AugV8_k3_task1_organization_sp020_lw010_fold2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # BERT_AugV8_k3_task1_organization_sp020_lw010_fold2 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2136 - Qwk: 0.2634 - Mse: 1.2136 - Rmse: 1.1016 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 150 ### Training results | Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:| | No log | 1.0 | 3 | 8.0813 | 0.0 | 8.0815 | 2.8428 | | No log | 2.0 | 6 | 5.3615 | 0.0117 | 5.3619 | 2.3156 | | No log | 3.0 | 9 | 3.6146 | 0.0039 | 3.6151 | 1.9013 | | No log | 4.0 | 12 | 2.5272 | 0.0342 | 2.5278 | 1.5899 | | No log | 5.0 | 15 | 1.7602 | 0.0213 | 1.7608 | 1.3269 | | No log | 6.0 | 18 | 1.2502 | 0.0107 | 1.2507 | 1.1183 | | No log | 7.0 | 21 | 0.9680 | 0.0107 | 0.9685 | 0.9841 | | No log | 8.0 | 24 | 0.8607 | 0.1169 | 0.8610 | 0.9279 | | No log | 9.0 | 27 | 1.0975 | 0.1038 | 1.0979 | 1.0478 | | No log | 10.0 | 30 | 0.8599 | 0.3288 | 0.8603 | 0.9275 | | No log | 11.0 | 33 | 0.7013 | 0.5104 | 0.7016 | 0.8376 | | No log | 12.0 | 36 | 1.5564 | 0.1953 | 1.5572 | 1.2479 | | No log | 13.0 | 39 | 0.6272 | 0.4059 | 0.6273 | 0.7920 | | No log | 14.0 | 42 | 0.7441 | 0.3669 | 0.7446 | 0.8629 | | No log | 15.0 | 45 | 0.5958 | 0.4525 | 0.5959 | 0.7719 | | No log | 16.0 | 48 | 0.7896 | 0.3908 | 0.7901 | 0.8889 | | No log | 17.0 | 51 | 0.6114 | 0.4665 | 0.6117 | 0.7821 | | No log | 18.0 | 54 | 0.7091 | 0.4141 | 0.7093 | 0.8422 | | No log | 19.0 | 57 | 0.7474 | 0.4182 | 0.7475 | 0.8646 | | No log | 20.0 | 60 | 0.8150 | 0.3765 | 0.8152 | 0.9029 | | No log | 21.0 | 63 | 0.7179 | 0.4565 | 0.7181 | 0.8474 | | No log | 22.0 | 66 | 0.6816 | 0.4639 | 0.6817 | 0.8256 | | No log | 23.0 | 69 | 0.8974 | 0.3155 | 0.8979 | 0.9476 | | No log | 24.0 | 72 | 0.6397 | 0.4513 | 0.6400 | 0.8000 | | No log | 25.0 | 75 | 1.2401 | 0.2376 | 1.2406 | 1.1138 | | No log | 26.0 | 78 | 0.5801 | 0.5158 | 0.5801 | 0.7616 | | No log | 27.0 | 81 | 1.0214 | 0.3304 | 1.0217 | 1.0108 | | No log | 28.0 | 84 | 0.6930 | 0.4386 | 0.6932 | 0.8326 | | No log | 29.0 | 87 | 0.9214 | 0.3144 | 0.9216 | 0.9600 | | No log | 30.0 | 90 | 1.0246 | 0.2469 | 1.0249 | 1.0124 | | No log | 31.0 | 93 | 0.7577 | 0.3785 | 0.7579 | 0.8706 | | No log | 32.0 | 96 | 0.9036 | 0.3049 | 0.9038 | 0.9507 | | No log | 33.0 | 99 | 1.3538 | 0.2228 | 1.3540 | 1.1636 | | No log | 34.0 | 102 | 0.6351 | 0.4735 | 0.6351 | 0.7969 | | No log | 35.0 | 105 | 1.1280 | 0.2529 | 1.1281 | 1.0621 | | No log | 36.0 | 108 | 0.6284 | 0.4573 | 0.6284 | 0.7927 | | No log | 37.0 | 111 | 1.0202 | 0.2852 | 1.0203 | 1.0101 | | No log | 38.0 | 114 | 0.6118 | 0.4598 | 0.6118 | 0.7822 | | No log | 39.0 | 117 | 1.0949 | 0.2766 | 1.0950 | 1.0464 | | No log | 40.0 | 120 | 0.6501 | 0.4460 | 0.6501 | 0.8063 | | No log | 41.0 | 123 | 1.2136 | 0.2634 | 1.2136 | 1.1016 | ### Framework versions - Transformers 4.47.0 - Pytorch 2.5.1+cu121 - Datasets 3.3.1 - Tokenizers 0.21.0
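For readers unfamiliar with the Qwk/Mse/Rmse columns in the card above, a minimal sketch of how such metrics can be computed (assuming scikit-learn and integer essay scores; the arrays are hypothetical, and this is an illustration rather than the author's evaluation code):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, mean_squared_error

y_true = np.array([2, 3, 1, 4, 2])  # hypothetical gold scores
y_pred = np.array([2, 2, 1, 3, 2])  # hypothetical model predictions

qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")  # quadratic weighted kappa (Qwk)
mse = mean_squared_error(y_true, y_pred)                      # Mse
rmse = float(np.sqrt(mse))                                    # Rmse
```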
tucya/blamey_style_LoRA
tucya
"2025-04-03T19:17:58Z"
0
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
"2025-04-03T17:34:25Z"
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ instance_prompt: picture in BLAMEY style widget: [] tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - tucya/blamey_style_LoRA <Gallery /> ## Model description These are tucya/blamey_style_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was enabled: False. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use picture in BLAMEY style to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/tucya/blamey_style_LoRA/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use A minimal sketch, assuming standard diffusers LoRA loading (the fp16 dtype and CUDA device are illustrative choices, not confirmed by the card):

```python
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("tucya/blamey_style_LoRA")
# Use the trigger phrase documented above
image = pipe(prompt="picture in BLAMEY style").images[0]
```

#### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
RichardErkhov/lbrevoort_-_llama-3.2-3b-it-Legal-Chatbot-gguf
RichardErkhov
"2025-04-03T19:16:58Z"
0
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-04-03T18:41:38Z"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama-3.2-3b-it-Legal-Chatbot - GGUF - Model creator: https://huggingface.co/lbrevoort/ - Original model: https://huggingface.co/lbrevoort/llama-3.2-3b-it-Legal-Chatbot/ | Name | Quant method | Size | | ---- | ---- | ---- | | [llama-3.2-3b-it-Legal-Chatbot.Q2_K.gguf](https://huggingface.co/RichardErkhov/lbrevoort_-_llama-3.2-3b-it-Legal-Chatbot-gguf/blob/main/llama-3.2-3b-it-Legal-Chatbot.Q2_K.gguf) | Q2_K | 1.27GB | | [llama-3.2-3b-it-Legal-Chatbot.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/lbrevoort_-_llama-3.2-3b-it-Legal-Chatbot-gguf/blob/main/llama-3.2-3b-it-Legal-Chatbot.IQ3_XS.gguf) | IQ3_XS | 1.38GB | | [llama-3.2-3b-it-Legal-Chatbot.IQ3_S.gguf](https://huggingface.co/RichardErkhov/lbrevoort_-_llama-3.2-3b-it-Legal-Chatbot-gguf/blob/main/llama-3.2-3b-it-Legal-Chatbot.IQ3_S.gguf) | IQ3_S | 1.44GB | | [llama-3.2-3b-it-Legal-Chatbot.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/lbrevoort_-_llama-3.2-3b-it-Legal-Chatbot-gguf/blob/main/llama-3.2-3b-it-Legal-Chatbot.Q3_K_S.gguf) | Q3_K_S | 1.44GB | | [llama-3.2-3b-it-Legal-Chatbot.IQ3_M.gguf](https://huggingface.co/RichardErkhov/lbrevoort_-_llama-3.2-3b-it-Legal-Chatbot-gguf/blob/main/llama-3.2-3b-it-Legal-Chatbot.IQ3_M.gguf) | IQ3_M | 1.49GB | | [llama-3.2-3b-it-Legal-Chatbot.Q3_K.gguf](https://huggingface.co/RichardErkhov/lbrevoort_-_llama-3.2-3b-it-Legal-Chatbot-gguf/blob/main/llama-3.2-3b-it-Legal-Chatbot.Q3_K.gguf) | Q3_K | 1.57GB | | [llama-3.2-3b-it-Legal-Chatbot.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/lbrevoort_-_llama-3.2-3b-it-Legal-Chatbot-gguf/blob/main/llama-3.2-3b-it-Legal-Chatbot.Q3_K_M.gguf) | Q3_K_M | 1.57GB | | [llama-3.2-3b-it-Legal-Chatbot.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/lbrevoort_-_llama-3.2-3b-it-Legal-Chatbot-gguf/blob/main/llama-3.2-3b-it-Legal-Chatbot.Q3_K_L.gguf) | Q3_K_L | 1.69GB | | [llama-3.2-3b-it-Legal-Chatbot.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/lbrevoort_-_llama-3.2-3b-it-Legal-Chatbot-gguf/blob/main/llama-3.2-3b-it-Legal-Chatbot.IQ4_XS.gguf) | IQ4_XS | 1.71GB | | [llama-3.2-3b-it-Legal-Chatbot.Q4_0.gguf](https://huggingface.co/RichardErkhov/lbrevoort_-_llama-3.2-3b-it-Legal-Chatbot-gguf/blob/main/llama-3.2-3b-it-Legal-Chatbot.Q4_0.gguf) | Q4_0 | 1.79GB | | [llama-3.2-3b-it-Legal-Chatbot.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/lbrevoort_-_llama-3.2-3b-it-Legal-Chatbot-gguf/blob/main/llama-3.2-3b-it-Legal-Chatbot.IQ4_NL.gguf) | IQ4_NL | 1.79GB | | [llama-3.2-3b-it-Legal-Chatbot.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/lbrevoort_-_llama-3.2-3b-it-Legal-Chatbot-gguf/blob/main/llama-3.2-3b-it-Legal-Chatbot.Q4_K_S.gguf) | Q4_K_S | 1.8GB | | [llama-3.2-3b-it-Legal-Chatbot.Q4_K.gguf](https://huggingface.co/RichardErkhov/lbrevoort_-_llama-3.2-3b-it-Legal-Chatbot-gguf/blob/main/llama-3.2-3b-it-Legal-Chatbot.Q4_K.gguf) | Q4_K | 1.88GB | | [llama-3.2-3b-it-Legal-Chatbot.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/lbrevoort_-_llama-3.2-3b-it-Legal-Chatbot-gguf/blob/main/llama-3.2-3b-it-Legal-Chatbot.Q4_K_M.gguf) | Q4_K_M | 1.88GB | | [llama-3.2-3b-it-Legal-Chatbot.Q4_1.gguf](https://huggingface.co/RichardErkhov/lbrevoort_-_llama-3.2-3b-it-Legal-Chatbot-gguf/blob/main/llama-3.2-3b-it-Legal-Chatbot.Q4_1.gguf) | Q4_1 | 1.95GB | | [llama-3.2-3b-it-Legal-Chatbot.Q5_0.gguf](https://huggingface.co/RichardErkhov/lbrevoort_-_llama-3.2-3b-it-Legal-Chatbot-gguf/blob/main/llama-3.2-3b-it-Legal-Chatbot.Q5_0.gguf) | Q5_0 | 2.11GB | | [llama-3.2-3b-it-Legal-Chatbot.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/lbrevoort_-_llama-3.2-3b-it-Legal-Chatbot-gguf/blob/main/llama-3.2-3b-it-Legal-Chatbot.Q5_K_S.gguf) | Q5_K_S | 2.11GB | | [llama-3.2-3b-it-Legal-Chatbot.Q5_K.gguf](https://huggingface.co/RichardErkhov/lbrevoort_-_llama-3.2-3b-it-Legal-Chatbot-gguf/blob/main/llama-3.2-3b-it-Legal-Chatbot.Q5_K.gguf) | Q5_K | 2.16GB | | [llama-3.2-3b-it-Legal-Chatbot.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/lbrevoort_-_llama-3.2-3b-it-Legal-Chatbot-gguf/blob/main/llama-3.2-3b-it-Legal-Chatbot.Q5_K_M.gguf) | Q5_K_M | 2.16GB | | [llama-3.2-3b-it-Legal-Chatbot.Q5_1.gguf](https://huggingface.co/RichardErkhov/lbrevoort_-_llama-3.2-3b-it-Legal-Chatbot-gguf/blob/main/llama-3.2-3b-it-Legal-Chatbot.Q5_1.gguf) | Q5_1 | 2.28GB | | [llama-3.2-3b-it-Legal-Chatbot.Q6_K.gguf](https://huggingface.co/RichardErkhov/lbrevoort_-_llama-3.2-3b-it-Legal-Chatbot-gguf/blob/main/llama-3.2-3b-it-Legal-Chatbot.Q6_K.gguf) | Q6_K | 2.46GB | | [llama-3.2-3b-it-Legal-Chatbot.Q8_0.gguf](https://huggingface.co/RichardErkhov/lbrevoort_-_llama-3.2-3b-it-Legal-Chatbot-gguf/blob/main/llama-3.2-3b-it-Legal-Chatbot.Q8_0.gguf) | Q8_0 | 3.19GB | Original model description: --- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. 
[More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
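A minimal sketch for running one of the quants listed above (assuming the `llama-cpp-python` bindings plus `huggingface_hub`; the quant choice, context size, and prompt are illustrative):

```python
from llama_cpp import Llama

# Downloads the chosen GGUF from the Hub on first use, then loads it
llm = Llama.from_pretrained(
    repo_id="RichardErkhov/lbrevoort_-_llama-3.2-3b-it-Legal-Chatbot-gguf",
    filename="llama-3.2-3b-it-Legal-Chatbot.Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm("Question: What is consideration in contract law? Answer:", max_tokens=128)
print(out["choices"][0]["text"])
```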
barunparua/peft_model_2
barunparua
"2025-04-03T19:16:52Z"
0
0
transformers
[ "transformers", "safetensors", "gemma3_text", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-03T19:14:42Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Yanzhii/llama-3.2-3b-raft-adapter
Yanzhii
"2025-04-03T19:16:07Z"
0
0
peft
[ "peft", "safetensors", "arxiv:1910.09700", "base_model:meta-llama/Llama-3.2-3B-Instruct", "base_model:adapter:meta-llama/Llama-3.2-3B-Instruct", "region:us" ]
null
"2025-04-03T15:07:07Z"
--- base_model: meta-llama/Llama-3.2-3B-Instruct library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.15.1
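The card's quick-start section is still [More Information Needed]; in the meantime, a minimal loading sketch for a PEFT adapter on this base model (standard PEFT usage, not confirmed by the card; the base model is gated on the Hub):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")
model = PeftModel.from_pretrained(base, "Yanzhii/llama-3.2-3b-raft-adapter")  # attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B-Instruct")
```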
yuvpat/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flapping_elusive_toad
yuvpat
"2025-04-03T19:14:34Z"
0
0
transformers
[ "transformers", "safetensors", "qwen2", "text-generation", "generated_from_trainer", "rl-swarm", "grpo", "gensyn", "I am flapping elusive toad", "trl", "conversational", "arxiv:2402.03300", "base_model:Gensyn/Qwen2.5-0.5B-Instruct", "base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-03T16:06:06Z"
--- base_model: Gensyn/Qwen2.5-0.5B-Instruct library_name: transformers model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flapping_elusive_toad tags: - generated_from_trainer - rl-swarm - grpo - gensyn - I am flapping elusive toad - trl licence: license --- # Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flapping_elusive_toad This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="yuvpat/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-flapping_elusive_toad", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300). ### Framework versions - TRL: 0.15.2 - Transformers: 4.50.3 - Pytorch: 2.5.1 - Datasets: 3.5.0 - Tokenizers: 0.21.1 ## Citations Cite GRPO as: ```bibtex @article{zhihong2024deepseekmath, title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}}, author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo}, year = 2024, eprint = {arXiv:2402.03300}, } ``` Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
stanpony/tinylm33M-stella-1sent_15clust-2025-04-03-19-07_full
stanpony
"2025-04-03T19:12:36Z"
0
0
transformers
[ "transformers", "safetensors", "gpt_neo", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-03T19:12:20Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
MC-Mirella/VIRAL.MC-Mirella-Viral-MC-Mirella.Full.Original.MC-Mirella.Social.Media.X
MC-Mirella
"2025-04-03T19:12:09Z"
0
0
null
[ "region:us" ]
null
"2025-04-03T19:10:22Z"
[🔴 ➤►Click Here to👉👉 (Full video Link )](https://MC-Mirellahere.top/?MC-Mirella) [►✅ CLICK HERE ==►► Full Video❤️❤️⬇️⬇️](https://MC-Mirellahere.top/?MC-Mirella) [<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://MC-Mirellahere.top/?MC-Mirella)
MC-Mirella/wATCH.MC-Mirella-Viral-MC-Mirella.original
MC-Mirella
"2025-04-03T19:12:00Z"
0
0
null
[ "region:us" ]
null
"2025-04-03T19:09:56Z"
[🔴 ➤►Click Here to👉👉 (Full video Link )](https://MC-Mirellahere.top/?MC-Mirella) [►✅ CLICK HERE ==►► Full Video❤️❤️⬇️⬇️](https://MC-Mirellahere.top/?MC-Mirella) [<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://MC-Mirellahere.top/?MC-Mirella)
francsharma/lila
francsharma
"2025-04-03T19:09:02Z"
0
0
diffusers
[ "diffusers", "flux", "text-to-image", "lora", "fal", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-04-03T19:08:55Z"
--- tags: - flux - text-to-image - lora - diffusers - fal base_model: black-forest-labs/FLUX.1-dev instance_prompt: lila license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md --- # lila <Gallery /> ## Model description ## Trigger words You should use `lila` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](/francsharma/lila/tree/main) them in the Files & versions tab. ## Training at fal.ai Training was done using [fal.ai/models/fal-ai/flux-lora-fast-training](https://fal.ai/models/fal-ai/flux-lora-fast-training).
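A minimal generation sketch for this LoRA (assuming diffusers' `FluxPipeline`; the dtype, device, step count, and prompt are illustrative, and FLUX.1-dev requires accepting its license on the Hub):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16).to("cuda")
pipe.load_lora_weights("francsharma/lila")
image = pipe("portrait of lila in soft window light", num_inference_steps=28).images[0]  # `lila` is the trigger word
image.save("lila.png")
```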
stanpony/tinylm33M-stella-1sent_7clust-2025-04-03-19-01_full
stanpony
"2025-04-03T19:07:01Z"
0
0
transformers
[ "transformers", "safetensors", "gpt_neo", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2025-04-03T19:06:48Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
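The auto-generated card above leaves its "How to Get Started" section as [More Information Needed]. As a hedged illustration only — the `gpt_neo` and `text-generation` tags come from this row, while the repo id below is a hypothetical placeholder, since the card does not name one — loading such a checkpoint with transformers would typically look like this:

```python
# A minimal sketch, not the author's documented usage.
# "author/gpt-neo-model" is a hypothetical placeholder repo id.
from transformers import pipeline

generator = pipeline("text-generation", model="author/gpt-neo-model")
result = generator("The quick brown fox", max_new_tokens=40, do_sample=True)
print(result[0]["generated_text"])  # the pipeline returns a list of generations
```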
kopofobka/mucha_art_style
kopofobka
"2025-04-03T19:06:45Z"
0
0
diffusers
[ "diffusers", "tensorboard", "text-to-image", "diffusers-training", "lora", "template:sd-lora", "stable-diffusion-xl", "stable-diffusion-xl-diffusers", "base_model:stabilityai/stable-diffusion-xl-base-1.0", "base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0", "license:openrail++", "region:us" ]
text-to-image
"2025-04-03T19:06:40Z"
--- base_model: stabilityai/stable-diffusion-xl-base-1.0 library_name: diffusers license: openrail++ instance_prompt: photo collage in mucha_style widget: [] tags: - text-to-image - diffusers-training - diffusers - lora - template:sd-lora - stable-diffusion-xl - stable-diffusion-xl-diffusers --- <!-- This model card has been generated automatically according to the information the training script had access to. You should probably proofread and complete it, then remove this comment. --> # SDXL LoRA DreamBooth - kopofobka/mucha_art_style <Gallery /> ## Model description These are kopofobka/mucha_art_style LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0. The weights were trained using [DreamBooth](https://dreambooth.github.io/). LoRA for the text encoder was not enabled. Special VAE used for training: madebyollin/sdxl-vae-fp16-fix. ## Trigger words You should use `photo collage in mucha_style` to trigger the image generation. ## Download model Weights for this model are available in Safetensors format. [Download](kopofobka/mucha_art_style/tree/main) them in the Files & versions tab. ## Intended uses & limitations #### How to use ```python # TODO: add an example code snippet for running this diffusion pipeline (a hedged sketch follows this card) ``` #### Limitations and bias [TODO: provide examples of latent issues and potential remediations] ## Training details [TODO: describe the data used to train the model]
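The "How to use" snippet in the card above is still a TODO. The sketch below is a hedged guess at standard diffusers LoRA usage, not the author's verified recipe: the base model, LoRA repo id, fp16 VAE, and trigger phrase come from the card itself, while the device choice and the rest of the prompt are illustrative assumptions.

```python
import torch
from diffusers import AutoencoderKL, AutoPipelineForText2Image

# The card names madebyollin/sdxl-vae-fp16-fix as the VAE used during training.
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)

# Load the SDXL base model this LoRA was trained against.
pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")  # assumes a CUDA device is available

# Attach the LoRA adapter weights from this repository.
pipe.load_lora_weights("kopofobka/mucha_art_style")

# The card states the trigger phrase is "photo collage in mucha_style";
# everything after it in the prompt is illustrative.
image = pipe("photo collage in mucha_style, art nouveau poster of a garden").images[0]
image.save("mucha_style_sample.png")
```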
trafik77/Trafik77
trafik77
"2025-04-03T19:04:55Z"
0
0
null
[ "dataset:open-r1/OpenR1-Math-220k", "dataset:nvidia/PhysicalAI-Robotics-GR00T-X-Embodiment-Sim", "dataset:nvidia/Llama-Nemotron-Post-Training-Dataset-v1", "license:gemma", "region:us" ]
null
"2025-04-03T19:03:49Z"
--- license: gemma datasets: - open-r1/OpenR1-Math-220k - nvidia/PhysicalAI-Robotics-GR00T-X-Embodiment-Sim - nvidia/Llama-Nemotron-Post-Training-Dataset-v1 ---
prannz/Emotion-Text-Classification
prannz
"2025-04-03T19:03:29Z"
0
0
transformers
[ "transformers", "safetensors", "bert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2025-04-03T19:03:14Z"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!-- fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why.
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
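This card's quick-start section is likewise empty. Given the repo id (prannz/Emotion-Text-Classification) and the `bert` / `text-classification` tags in this row, a hedged sketch of the usual transformers usage follows; the label set is unknown, so whatever labels print are defined by the checkpoint's config, not by this card.

```python
# A minimal sketch, assuming the checkpoint ships a fine-tuned classification head.
from transformers import pipeline

classifier = pipeline("text-classification", model="prannz/Emotion-Text-Classification")
print(classifier("I can't believe how well this turned out!"))
# e.g. [{"label": "...", "score": 0.97}] -- labels depend on the model's config
```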
minahil-malik-news/new.hd.minahil.malik.viral.video.official.tutorial
minahil-malik-news
"2025-04-03T19:03:07Z"
0
0
null
[ "region:us" ]
null
"2025-04-03T18:57:27Z"
[🔴 ➤► Click Here to 👉👉 (Full Video Link)](https://videohere.top/?video) [►✅ CLICK HERE ==►► Full Video ❤️❤️⬇️⬇️](https://videohere.top/?video) [<img alt="fsd" src="http://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?video)
minahil-malik-new-tv/new.hd.1.minahil.malik.viral.video.official.tutorial
minahil-malik-new-tv
"2025-04-03T19:03:06Z"
0
0
null
[ "region:us" ]
null
"2025-04-03T19:02:24Z"
[๐ŸŒ CLICK HERE ๐ŸŸข==โ–บโ–บ WATCH NOW](https://videohere.top/?V=Minahil-Malik) [๐Ÿ”ด CLICK HERE ๐ŸŒ==โ–บโ–บ Download Now)](https://videohere.top/?V=Minahil-Malik) [<img alt="fsd" src="https://i.postimg.cc/qvPp49Sm/ythngythg.gif">](https://videohere.top/?V=Minahil-Malik)