Dataset Viewer
Auto-converted to Parquet
| Column | Type | Min | Max |
|:--------------|:---------------------|:--------------------|:--------------------|
| modelId | string (length) | 5 | 138 |
| author | string (length) | 2 | 42 |
| last_modified | date | 2020-02-15 11:33:14 | 2025-05-01 00:43:14 |
| downloads | int64 | 0 | 223M |
| likes | int64 | 0 | 11.7k |
| library_name | string (444 classes) | | |
| tags | sequence (length) | 1 | 4.05k |
| pipeline_tag | string (54 classes) | | |
| createdAt | date | 2022-03-02 23:29:04 | 2025-05-01 00:41:49 |
| card | string (length) | 11 | 1.01M |
research-backup/xlm-roberta-large-trimmed-de-75000
research-backup
"2023-03-06T05:40:56"
105
0
transformers
[ "transformers", "pytorch", "xlm-roberta", "fill-mask", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2023-03-06T05:24:36"
# Vocabulary Trimmed [xlm-roberta-large](https://huggingface.co/xlm-roberta-large): `vocabtrimmer/xlm-roberta-large-trimmed-de-75000`

This model is a trimmed version of [xlm-roberta-large](https://huggingface.co/xlm-roberta-large) produced by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool that trims a language model's vocabulary to compress the model size. The following table summarizes the trimming process.

|                            | xlm-roberta-large | vocabtrimmer/xlm-roberta-large-trimmed-de-75000 |
|:---------------------------|:------------------|:------------------------------------------------|
| parameter_size_full        | 560,142,482       | 380,767,482                                     |
| parameter_size_embedding   | 256,002,048       | 76,802,048                                      |
| vocab_size                 | 250,002           | 75,002                                          |
| compression_rate_full      | 100.0             | 67.98                                           |
| compression_rate_embedding | 100.0             | 30.0                                            |

The following table shows the parameters used to trim the vocabulary.

| language | dataset                     | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:---------|:----------------------------|:---------------|:-------------|:--------------|------------------:|--------------:|
| de       | vocabtrimmer/mc4_validation | text           | de           | validation    | 75000             | 2             |
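The compression rates in this card's summary table can be reproduced from the reported parameter counts alone. A minimal sketch, assuming xlm-roberta-large's hidden size of 1,024 (which the embedding counts imply: 250,002 × 1,024 = 256,002,048):

```python
# Reproduce the summary-table compression rates from the reported counts.
full_before, full_after = 560_142_482, 380_767_482
vocab_before, vocab_after = 250_002, 75_002
hidden_size = 1024  # xlm-roberta-large hidden size (assumption, implied by the counts)

emb_before = vocab_before * hidden_size  # embedding parameters before trimming
emb_after = vocab_after * hidden_size    # embedding parameters after trimming

rate_full = round(full_after / full_before * 100, 2)
rate_embedding = round(emb_after / emb_before * 100, 2)
print(rate_full, rate_embedding)  # 67.98 30.0, matching the table
```

The embedding matrix accounts for essentially all of the saved parameters: the non-embedding weights are untouched by vocabulary trimming.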
haobozhang/alpaca-sft-0.0-epoch2
haobozhang
"2024-08-08T18:09:47"
5
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "arxiv:1910.09700", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-08-07T19:29:33"
---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
Hardeep/aromatic-yak
Hardeep
"2023-05-21T13:28:08"
5
0
transformers
[ "transformers", "pytorch", "opt", "text-generation", "gpt", "llm", "large language model", "h2o-llmstudio", "en", "autotrain_compatible", "text-generation-inference", "region:us" ]
text-generation
"2023-05-21T13:13:35"
---
language:
- en
library_name: transformers
tags:
- gpt
- llm
- large language model
- h2o-llmstudio
inference: false
thumbnail: https://h2o.ai/etc.clientlibs/h2o/clientlibs/clientlib-site/resources/images/favicon.ico
---

# Model Card

## Summary

This model was trained using [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio).

- Base model: [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b)

## Usage

To use the model with the `transformers` library on a machine with GPUs, first make sure you have the `transformers`, `accelerate`, and `torch` libraries installed.

```bash
pip install transformers==4.28.1
pip install accelerate==0.18.0
pip install torch==2.0.0
```

```python
import torch
from transformers import pipeline

generate_text = pipeline(
    model="Hardeep/aromatic-yak",
    torch_dtype=torch.float16,
    trust_remote_code=True,
    use_fast=True,
    device_map={"": "cuda:0"},
)

res = generate_text(
    "Why is drinking water so healthy?",
    min_new_tokens=2,
    max_new_tokens=256,
    do_sample=False,
    num_beams=2,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
    renormalize_logits=True,
)
print(res[0]["generated_text"])
```

You can print a sample prompt after the preprocessing step to see how it is fed to the tokenizer:

```python
print(generate_text.preprocess("Why is drinking water so healthy?")["prompt_text"])
```

```bash
<|prompt|>Why is drinking water so healthy?</s><|answer|>
```

Alternatively, if you prefer not to use `trust_remote_code=True`, you can download [h2oai_pipeline.py](h2oai_pipeline.py), store it alongside your notebook, and construct the pipeline yourself from the loaded model and tokenizer:

```python
import torch
from h2oai_pipeline import H2OTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "Hardeep/aromatic-yak",
    use_fast=True,
    padding_side="left",
)
model = AutoModelForCausalLM.from_pretrained(
    "Hardeep/aromatic-yak",
    torch_dtype=torch.float16,
    device_map={"": "cuda:0"},
)
generate_text = H2OTextGenerationPipeline(model=model, tokenizer=tokenizer)

res = generate_text(
    "Why is drinking water so healthy?",
    min_new_tokens=2,
    max_new_tokens=256,
    do_sample=False,
    num_beams=2,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
    renormalize_logits=True,
)
print(res[0]["generated_text"])
```

You may also construct the pipeline from the loaded model and tokenizer yourself and handle the preprocessing steps explicitly:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Hardeep/aromatic-yak"  # either local folder or huggingface model name
# Important: The prompt needs to be in the same format the model was trained with.
# You can find an example prompt in the experiment logs.
prompt = "<|prompt|>How are you?</s><|answer|>"

tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=True)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.cuda().eval()

inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to("cuda")

# generate configuration can be modified to your needs
tokens = model.generate(
    **inputs,
    min_new_tokens=2,
    max_new_tokens=256,
    do_sample=False,
    num_beams=2,
    temperature=float(0.3),
    repetition_penalty=float(1.2),
    renormalize_logits=True,
)[0]
tokens = tokens[inputs["input_ids"].shape[1]:]
answer = tokenizer.decode(tokens, skip_special_tokens=True)
print(answer)
```

## Model Architecture

```
OPTForCausalLM(
  (model): OPTModel(
    (decoder): OPTDecoder(
      (embed_tokens): Embedding(50272, 4096, padding_idx=1)
      (embed_positions): OPTLearnedPositionalEmbedding(2050, 4096)
      (final_layer_norm): LayerNorm((4096,), eps=1e-05, elementwise_affine=True)
      (layers): ModuleList(
        (0-31): 32 x OPTDecoderLayer(
          (self_attn): OPTAttention(
            (k_proj): Linear(in_features=4096, out_features=4096, bias=True)
            (v_proj): Linear(in_features=4096, out_features=4096, bias=True)
            (q_proj): Linear(in_features=4096, out_features=4096, bias=True)
            (out_proj): Linear(in_features=4096, out_features=4096, bias=True)
          )
          (activation_fn): ReLU()
          (self_attn_layer_norm): LayerNorm((4096,), eps=1e-05, elementwise_affine=True)
          (fc1): Linear(in_features=4096, out_features=16384, bias=True)
          (fc2): Linear(in_features=16384, out_features=4096, bias=True)
          (final_layer_norm): LayerNorm((4096,), eps=1e-05, elementwise_affine=True)
        )
      )
    )
  )
  (lm_head): Linear(in_features=4096, out_features=50272, bias=False)
)
```

## Model Configuration

This model was trained using H2O LLM Studio with the configuration in [cfg.yaml](cfg.yaml). Visit [H2O LLM Studio](https://github.com/h2oai/h2o-llmstudio) to learn how to train your own large language models.

## Model Validation

Model validation results using [EleutherAI lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness).

```bash
CUDA_VISIBLE_DEVICES=0 python main.py --model hf-causal-experimental --model_args pretrained=Hardeep/aromatic-yak --tasks openbookqa,arc_easy,winogrande,hellaswag,arc_challenge,piqa,boolq --device cuda &> eval.log
```

## Disclaimer

Please read this disclaimer carefully before using the large language model provided in this repository. Your use of the model signifies your agreement to the following terms and conditions.

- Biases and Offensiveness: The large language model is trained on a diverse range of internet text data, which may contain biased, racist, offensive, or otherwise inappropriate content. By using this model, you acknowledge and accept that the generated content may sometimes exhibit biases or produce content that is offensive or inappropriate. The developers of this repository do not endorse, support, or promote any such content or viewpoints.
- Limitations: The large language model is an AI-based tool and not a human. It may produce incorrect, nonsensical, or irrelevant responses. It is the user's responsibility to critically evaluate the generated content and use it at their discretion.
- Use at Your Own Risk: Users of this large language model must assume full responsibility for any consequences that may arise from their use of the tool. The developers and contributors of this repository shall not be held liable for any damages, losses, or harm resulting from the use or misuse of the provided model.
- Ethical Considerations: Users are encouraged to use the large language model responsibly and ethically. By using this model, you agree not to use it for purposes that promote hate speech, discrimination, harassment, or any form of illegal or harmful activities.
- Reporting Issues: If you encounter any biased, offensive, or otherwise inappropriate content generated by the large language model, please report it to the repository maintainers through the provided channels. Your feedback will help improve the model and mitigate potential issues.
- Changes to this Disclaimer: The developers of this repository reserve the right to modify or update this disclaimer at any time without prior notice. It is the user's responsibility to periodically review the disclaimer to stay informed about any changes.

By using the large language model provided in this repository, you agree to accept and comply with the terms and conditions outlined in this disclaimer. If you do not agree with any part of this disclaimer, you should refrain from using the model and any content generated by it.
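The prompt format printed earlier in this card can be reproduced without loading the model. A hypothetical helper (the tag strings are taken verbatim from the card's printed example; the function name is ours):

```python
def format_prompt(question: str) -> str:
    """Wrap a question in the H2O LLM Studio prompt format shown in this card."""
    return f"<|prompt|>{question}</s><|answer|>"

print(format_prompt("Why is drinking water so healthy?"))
# <|prompt|>Why is drinking water so healthy?</s><|answer|>
```

This is what `generate_text.preprocess(...)["prompt_text"]` produces in the pipeline example above; generation quality depends on matching this format exactly.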
Aryanne/Llama-2-4B-gguf
Aryanne
"2023-09-23T23:58:55"
46
3
null
[ "gguf", "text-generation", "endpoints_compatible", "region:us" ]
text-generation
"2023-09-23T23:17:45"
---
pipeline_tag: text-generation
---

GGML/GGUF (v2) quantizations of the model https://huggingface.co/winglian/llama-2-4b, a Llama 2 4B model derived from Llama 2 7B.
DavidMicheal007/ODDBOSS
DavidMicheal007
"2025-04-22T13:22:18"
0
1
null
[ "license:mit", "region:us" ]
null
"2025-04-22T13:22:18"
*(No model card content: the card fetch returned Hugging Face's "429 — rate limit" error page instead of a card.)*
orcn/llama-completion-10epoch
orcn
"2025-03-13T18:49:32"
4
0
transformers
[ "transformers", "safetensors", "mllama", "image-text-to-text", "text-generation-inference", "unsloth", "conversational", "en", "base_model:unsloth/Llama-3.2-11B-Vision-Instruct", "base_model:finetune:unsloth/Llama-3.2-11B-Vision-Instruct", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
"2025-03-12T23:40:19"
---
base_model: unsloth/Llama-3.2-11B-Vision-Instruct
tags:
- text-generation-inference
- transformers
- unsloth
- mllama
license: apache-2.0
language:
- en
---

# Uploaded finetuned model

- **Developed by:** orcn
- **License:** apache-2.0
- **Finetuned from model:** unsloth/Llama-3.2-11B-Vision-Instruct

This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
nimbul/LobotomizedCogito
nimbul
"2025-04-18T18:57:10"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2025-04-18T18:57:07"
*(No model card content: the card fetch returned Hugging Face's "429 — rate limit" error page instead of a card.)*
klashenrik/reinforce-pixelcopter-v1
klashenrik
"2023-01-07T20:31:02"
0
0
null
[ "Pixelcopter-PLE-v0", "reinforce", "reinforcement-learning", "custom-implementation", "deep-rl-class", "model-index", "region:us" ]
reinforcement-learning
"2023-01-07T12:53:20"
---
tags:
- Pixelcopter-PLE-v0
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: reinforce-pixelcopter-v1
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: Pixelcopter-PLE-v0
      type: Pixelcopter-PLE-v0
    metrics:
    - type: mean_reward
      value: 46.90 +/- 27.81
      name: mean_reward
      verified: false
---

# **Reinforce** Agent playing **Pixelcopter-PLE-v0**

This is a trained model of a **Reinforce** agent playing **Pixelcopter-PLE-v0**. To learn how to use this model and train your own, check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
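The card does not include the training code, but the core of REINFORCE — increase the log-probability of an action in proportion to the reward it received — can be sketched on a toy two-armed bandit instead of Pixelcopter. This is an illustration only, not the course implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.zeros(2)  # policy logits over two actions

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

# Toy bandit: action 1 pays reward 1.0, action 0 pays 0.2.
for _ in range(2000):
    p = softmax(theta)
    a = rng.choice(2, p=p)
    r = 1.0 if a == 1 else 0.2
    grad = -p
    grad[a] += 1.0            # gradient of log pi(a | theta)
    theta += 0.1 * r * grad   # REINFORCE update: lr * reward * score function

print(softmax(theta))  # the policy should strongly prefer action 1
```

The real agent replaces the bandit with full Pixelcopter episodes, accumulates discounted returns, and usually subtracts a baseline to reduce the variance visible in the card's `46.90 +/- 27.81` mean reward.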
AlignmentResearch/robust_llm_pythia-wl-14m-mz-ada-v3-ch-143000
AlignmentResearch
"2024-03-26T11:32:13"
103
0
transformers
[ "transformers", "safetensors", "gpt_neox", "text-classification", "generated_from_trainer", "base_model:EleutherAI/pythia-14m", "base_model:finetune:EleutherAI/pythia-14m", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-classification
"2024-03-26T11:32:08"
---
tags:
- generated_from_trainer
base_model: EleutherAI/pythia-14m
model-index:
- name: robust_llm_pythia-wl-14m-mz-ada-v3-ch-143000
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# robust_llm_pythia-wl-14m-mz-ada-v3-ch-143000

This model is a fine-tuned version of [EleutherAI/pythia-14m](https://huggingface.co/EleutherAI/pythia-14m) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

### Framework versions

- Transformers 4.37.2
- Pytorch 2.2.0
- Datasets 2.17.0
- Tokenizers 0.15.2
RichardErkhov/kenken6696_-_Llama-3.2-3B_4x3_fix_tail-gguf
RichardErkhov
"2025-03-25T13:00:43"
0
0
null
[ "gguf", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
"2025-03-25T11:57:55"
Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)

Llama-3.2-3B_4x3_fix_tail - GGUF

- Model creator: https://huggingface.co/kenken6696/
- Original model: https://huggingface.co/kenken6696/Llama-3.2-3B_4x3_fix_tail/

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [Llama-3.2-3B_4x3_fix_tail.Q2_K.gguf](https://huggingface.co/RichardErkhov/kenken6696_-_Llama-3.2-3B_4x3_fix_tail-gguf/blob/main/Llama-3.2-3B_4x3_fix_tail.Q2_K.gguf) | Q2_K | 1.27GB |
| [Llama-3.2-3B_4x3_fix_tail.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/kenken6696_-_Llama-3.2-3B_4x3_fix_tail-gguf/blob/main/Llama-3.2-3B_4x3_fix_tail.IQ3_XS.gguf) | IQ3_XS | 1.38GB |
| [Llama-3.2-3B_4x3_fix_tail.IQ3_S.gguf](https://huggingface.co/RichardErkhov/kenken6696_-_Llama-3.2-3B_4x3_fix_tail-gguf/blob/main/Llama-3.2-3B_4x3_fix_tail.IQ3_S.gguf) | IQ3_S | 1.44GB |
| [Llama-3.2-3B_4x3_fix_tail.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/kenken6696_-_Llama-3.2-3B_4x3_fix_tail-gguf/blob/main/Llama-3.2-3B_4x3_fix_tail.Q3_K_S.gguf) | Q3_K_S | 1.44GB |
| [Llama-3.2-3B_4x3_fix_tail.IQ3_M.gguf](https://huggingface.co/RichardErkhov/kenken6696_-_Llama-3.2-3B_4x3_fix_tail-gguf/blob/main/Llama-3.2-3B_4x3_fix_tail.IQ3_M.gguf) | IQ3_M | 1.49GB |
| [Llama-3.2-3B_4x3_fix_tail.Q3_K.gguf](https://huggingface.co/RichardErkhov/kenken6696_-_Llama-3.2-3B_4x3_fix_tail-gguf/blob/main/Llama-3.2-3B_4x3_fix_tail.Q3_K.gguf) | Q3_K | 1.57GB |
| [Llama-3.2-3B_4x3_fix_tail.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/kenken6696_-_Llama-3.2-3B_4x3_fix_tail-gguf/blob/main/Llama-3.2-3B_4x3_fix_tail.Q3_K_M.gguf) | Q3_K_M | 1.57GB |
| [Llama-3.2-3B_4x3_fix_tail.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/kenken6696_-_Llama-3.2-3B_4x3_fix_tail-gguf/blob/main/Llama-3.2-3B_4x3_fix_tail.Q3_K_L.gguf) | Q3_K_L | 1.69GB |
| [Llama-3.2-3B_4x3_fix_tail.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/kenken6696_-_Llama-3.2-3B_4x3_fix_tail-gguf/blob/main/Llama-3.2-3B_4x3_fix_tail.IQ4_XS.gguf) | IQ4_XS | 1.71GB |
| [Llama-3.2-3B_4x3_fix_tail.Q4_0.gguf](https://huggingface.co/RichardErkhov/kenken6696_-_Llama-3.2-3B_4x3_fix_tail-gguf/blob/main/Llama-3.2-3B_4x3_fix_tail.Q4_0.gguf) | Q4_0 | 1.79GB |
| [Llama-3.2-3B_4x3_fix_tail.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/kenken6696_-_Llama-3.2-3B_4x3_fix_tail-gguf/blob/main/Llama-3.2-3B_4x3_fix_tail.IQ4_NL.gguf) | IQ4_NL | 1.79GB |
| [Llama-3.2-3B_4x3_fix_tail.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/kenken6696_-_Llama-3.2-3B_4x3_fix_tail-gguf/blob/main/Llama-3.2-3B_4x3_fix_tail.Q4_K_S.gguf) | Q4_K_S | 1.8GB |
| [Llama-3.2-3B_4x3_fix_tail.Q4_K.gguf](https://huggingface.co/RichardErkhov/kenken6696_-_Llama-3.2-3B_4x3_fix_tail-gguf/blob/main/Llama-3.2-3B_4x3_fix_tail.Q4_K.gguf) | Q4_K | 1.88GB |
| [Llama-3.2-3B_4x3_fix_tail.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/kenken6696_-_Llama-3.2-3B_4x3_fix_tail-gguf/blob/main/Llama-3.2-3B_4x3_fix_tail.Q4_K_M.gguf) | Q4_K_M | 1.88GB |
| [Llama-3.2-3B_4x3_fix_tail.Q4_1.gguf](https://huggingface.co/RichardErkhov/kenken6696_-_Llama-3.2-3B_4x3_fix_tail-gguf/blob/main/Llama-3.2-3B_4x3_fix_tail.Q4_1.gguf) | Q4_1 | 1.95GB |
| [Llama-3.2-3B_4x3_fix_tail.Q5_0.gguf](https://huggingface.co/RichardErkhov/kenken6696_-_Llama-3.2-3B_4x3_fix_tail-gguf/blob/main/Llama-3.2-3B_4x3_fix_tail.Q5_0.gguf) | Q5_0 | 2.11GB |
| [Llama-3.2-3B_4x3_fix_tail.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/kenken6696_-_Llama-3.2-3B_4x3_fix_tail-gguf/blob/main/Llama-3.2-3B_4x3_fix_tail.Q5_K_S.gguf) | Q5_K_S | 2.11GB |
| [Llama-3.2-3B_4x3_fix_tail.Q5_K.gguf](https://huggingface.co/RichardErkhov/kenken6696_-_Llama-3.2-3B_4x3_fix_tail-gguf/blob/main/Llama-3.2-3B_4x3_fix_tail.Q5_K.gguf) | Q5_K | 2.16GB |
| [Llama-3.2-3B_4x3_fix_tail.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/kenken6696_-_Llama-3.2-3B_4x3_fix_tail-gguf/blob/main/Llama-3.2-3B_4x3_fix_tail.Q5_K_M.gguf) | Q5_K_M | 2.16GB |
| [Llama-3.2-3B_4x3_fix_tail.Q5_1.gguf](https://huggingface.co/RichardErkhov/kenken6696_-_Llama-3.2-3B_4x3_fix_tail-gguf/blob/main/Llama-3.2-3B_4x3_fix_tail.Q5_1.gguf) | Q5_1 | 2.28GB |
| [Llama-3.2-3B_4x3_fix_tail.Q6_K.gguf](https://huggingface.co/RichardErkhov/kenken6696_-_Llama-3.2-3B_4x3_fix_tail-gguf/blob/main/Llama-3.2-3B_4x3_fix_tail.Q6_K.gguf) | Q6_K | 2.46GB |
| [Llama-3.2-3B_4x3_fix_tail.Q8_0.gguf](https://huggingface.co/RichardErkhov/kenken6696_-_Llama-3.2-3B_4x3_fix_tail-gguf/blob/main/Llama-3.2-3B_4x3_fix_tail.Q8_0.gguf) | Q8_0 | 3.19GB |

Original model description:

---
library_name: transformers
tags: []
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
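A rough cross-check of the quantization size table in this card: dividing file size by parameter count gives the effective bits per weight. Assuming Llama-3.2-3B has roughly 3.21B parameters (a figure not stated in the card) and treating GB as decimal gigabytes:

```python
params = 3.21e9  # approximate Llama-3.2-3B parameter count (assumption)

def bits_per_weight(size_gb: float) -> float:
    # bytes -> bits, divided by weight count
    return size_gb * 8e9 / params

# Sanity check against the table: Q8_0 should land near 8 bits/weight,
# and Q4_K_M a little above 4 (K-quants mix precisions per tensor).
print(round(bits_per_weight(3.19), 2))  # Q8_0 (3.19GB)
print(round(bits_per_weight(1.88), 2))  # Q4_K_M (1.88GB)
```

That the Q8_0 file comes out very close to 8 bits per weight suggests the table's sizes and the assumed parameter count are mutually consistent.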
MK-Mostafa/dummy-model
MK-Mostafa
"2025-03-16T15:58:05"
0
0
transformers
[ "transformers", "tf", "camembert", "fill-mask", "generated_from_keras_callback", "base_model:almanach/camembert-base", "base_model:finetune:almanach/camembert-base", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2025-03-16T15:55:39"
--- library_name: transformers license: mit base_model: camembert-base tags: - generated_from_keras_callback model-index: - name: dummy-model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # dummy-model This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.48.3 - TensorFlow 2.18.0 - Tokenizers 0.21.0
ramybaly/ner_nerd_fine
ramybaly
"2021-08-20T19:01:06"
8
0
transformers
[ "transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:nerd", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
token-classification
"2022-03-02T23:29:05"
--- license: apache-2.0 tags: - generated_from_trainer datasets: - nerd metrics: - precision - recall - f1 - accuracy model_index: - name: ner_nerd_fine results: - task: name: Token Classification type: token-classification dataset: name: nerd type: nerd args: nerd metric: name: Accuracy type: accuracy value: 0.9050232835369201 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ner_nerd_fine This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the nerd dataset. It achieves the following results on the evaluation set: - Loss: 0.3373 - Precision: 0.6326 - Recall: 0.6734 - F1: 0.6524 - Accuracy: 0.9050 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.6219 | 1.0 | 8235 | 0.3347 | 0.6066 | 0.6581 | 0.6313 | 0.9015 | | 0.3071 | 2.0 | 16470 | 0.3165 | 0.6349 | 0.6637 | 0.6490 | 0.9060 | | 0.2384 | 3.0 | 24705 | 0.3311 | 0.6373 | 0.6769 | 0.6565 | 0.9068 | | 0.1834 | 4.0 | 32940 | 0.3414 | 0.6349 | 0.6780 | 0.6557 | 0.9069 | | 0.1392 | 5.0 | 41175 | 0.3793 | 0.6334 | 0.6775 | 0.6547 | 0.9068 | ### Framework versions - Transformers 4.9.1 - Pytorch 1.9.0+cu102 - Datasets 1.11.0 - Tokenizers 0.10.2
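The precision, recall, and F1 reported above for token classification are typically entity-level micro-averages (seqeval-style) rather than per-token scores: a predicted entity counts only if its label and span both match exactly. A minimal sketch of that calculation over hypothetical entity spans, each represented as a (label, start, end) tuple:

```python
def entity_prf(gold, pred):
    """Micro precision/recall/F1 over exact-match entity spans."""
    gold_set, pred_set = set(gold), set(pred)
    tp = len(gold_set & pred_set)
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical example: one span matches exactly, one is mislabeled.
gold = [("PER", 0, 2), ("LOC", 5, 6)]
pred = [("PER", 0, 2), ("ORG", 5, 6)]
print(entity_prf(gold, pred))  # (0.5, 0.5, 0.5)
```

The spans and labels here are illustrative, not drawn from the NERD dataset; the exact-match convention explains why entity-level F1 (0.6524) can sit well below token accuracy (0.9050).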
damgomz/ft_bs32_lr7_base_x8
damgomz
"2024-05-21T04:51:30"
106
0
transformers
[ "transformers", "safetensors", "albert", "text-classification", "fill-mask", "en", "autotrain_compatible", "endpoints_compatible", "region:us" ]
fill-mask
"2024-05-20T21:06:02"
--- language: en tags: - fill-mask kwargs: timestamp: '2024-05-21T06:51:24' project_name: ft_bs32_lr7_base_x8_emissions_tracker run_id: d86ab92c-42c5-46ae-a58f-cb705b0a7a8b duration: 29443.913482666016 emissions: 0.0192615910805578 emissions_rate: 6.54179040836314e-07 cpu_power: 42.5 gpu_power: 0.0 ram_power: 7.5 cpu_energy: 0.3476012287669722 gpu_energy: 0 ram_energy: 0.0613410671621561 energy_consumed: 0.4089422959291282 country_name: Switzerland country_iso_code: CHE region: .nan cloud_provider: .nan cloud_region: .nan os: Linux-5.14.0-70.30.1.el9_0.x86_64-x86_64-with-glibc2.34 python_version: 3.10.4 codecarbon_version: 2.3.4 cpu_count: 3 cpu_model: Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz gpu_count: .nan gpu_model: .nan longitude: .nan latitude: .nan ram_total_size: 20 tracking_mode: machine on_cloud: N pue: 1.0 --- ## Environmental Impact (CODE CARBON DEFAULT) | Metric | Value | |--------------------------|---------------------------------| | Duration (in seconds) | 29443.913482666016 | | Emissions (Co2eq in kg) | 0.0192615910805578 | | CPU power (W) | 42.5 | | GPU power (W) | [No GPU] | | RAM power (W) | 7.5 | | CPU energy (kWh) | 0.3476012287669722 | | GPU energy (kWh) | [No GPU] | | RAM energy (kWh) | 0.0613410671621561 | | Consumed energy (kWh) | 0.4089422959291282 | | Country name | Switzerland | | Cloud provider | nan | | Cloud region | nan | | CPU count | 3 | | CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz | | GPU count | nan | | GPU model | nan | ## Environmental Impact (for one core) | Metric | Value | |--------------------------|---------------------------------| | CPU energy (kWh) | 0.056679533454132076 | | Emissions (Co2eq in kg) | 0.011532199447377522 | ## Note 20 May 2024 ## My Config | Config | Value | |--------------------------|-----------------| | checkpoint | albert-base-v2 | | model_name | ft_bs32_lr7_base_x8 | | sequence_length | 400 | | num_epoch | 20 | | learning_rate | 5e-07 | | batch_size | 32 | | weight_decay | 0.0 | 
| warm_up_prop | 0.0 | | drop_out_prob | 0.1 | | packing_length | 100 | | train_test_split | 0.2 | | num_steps | 108600 | ## Training and Testing steps Epoch | Train Loss | Test Loss | Accuracy | Recall ---|---|---|---|--- | 0 | 0.599385 | 0.533520 | 0.732695 | 0.743865 | | 1 | 0.497255 | 0.495337 | 0.756996 | 0.874233 | | 2 | 0.456973 | 0.457591 | 0.777614 | 0.812883 | | 3 | 0.428078 | 0.435462 | 0.792342 | 0.811350 | | 4 | 0.405985 | 0.418146 | 0.806333 | 0.865031 | | 5 | 0.386763 | 0.402823 | 0.818851 | 0.852761 | | 6 | 0.370968 | 0.398841 | 0.818115 | 0.819018 | | 7 | 0.361504 | 0.389461 | 0.822533 | 0.865031 | | 8 | 0.348315 | 0.386434 | 0.828424 | 0.881902 | | 9 | 0.339924 | 0.381690 | 0.829897 | 0.820552 | | 10 | 0.333508 | 0.379336 | 0.829161 | 0.869632 | | 11 | 0.327714 | 0.375907 | 0.831370 | 0.860429 | | 12 | 0.319972 | 0.372091 | 0.835052 | 0.861963 | | 13 | 0.311965 | 0.373268 | 0.833579 | 0.829755 | | 14 | 0.307354 | 0.374971 | 0.834315 | 0.835890 | | 15 | 0.303944 | 0.373268 | 0.835052 | 0.874233 | | 16 | 0.297742 | 0.387149 | 0.831370 | 0.906442 | | 17 | 0.288179 | 0.376481 | 0.837997 | 0.878834 | | 18 | 0.284836 | 0.380563 | 0.834315 | 0.892638 | | 19 | 0.279182 | 0.376233 | 0.835788 | 0.843558 |
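The consumed energy reported above is the sum of the CPU and RAM components (there is no GPU), and dividing the emissions figure by it gives the implied carbon intensity of the Swiss grid as used by the tracker. A quick consistency check of those numbers:

```python
import math

cpu_energy = 0.3476012287669722       # kWh, from the table above
ram_energy = 0.0613410671621561       # kWh
energy_consumed = 0.4089422959291282  # kWh, reported total

# Total energy is CPU + RAM (no GPU in this run).
assert math.isclose(cpu_energy + ram_energy, energy_consumed)

emissions_kg = 0.0192615910805578
intensity = emissions_kg / energy_consumed * 1000  # gCO2eq per kWh
print(f"implied carbon intensity: {intensity:.1f} gCO2eq/kWh")  # ~47.1
```

The ~47 gCO2eq/kWh figure is consistent with a low-carbon grid mix such as Switzerland's.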
Cassiolima/distilbert-base-uncased-finetuned-emotion
Cassiolima
"2025-03-06T14:46:08"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2025-03-06T13:37:08"
--- library_name: transformers license: apache-2.0 base_model: distilbert-base-uncased tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2129 - Accuracy: 0.9285 - F1: 0.9286 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8267 | 1.0 | 250 | 0.3161 | 0.904 | 0.9026 | | 0.2545 | 2.0 | 500 | 0.2129 | 0.9285 | 0.9286 | ### Framework versions - Transformers 4.48.3 - Pytorch 2.5.1+cu124 - Tokenizers 0.21.0
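The accuracy and F1 above come from predictions on the validation split; for a multi-class emotion task the F1 is an average over per-class scores (the Trainer setup behind these auto-generated cards usually uses a weighted average, but the simpler macro variant shown below has the same per-class building blocks). A pure-Python sketch on hypothetical labels:

```python
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_f1(y_true, y_pred):
    scores = []
    for c in sorted(set(y_true) | set(y_pred)):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)  # unweighted mean over classes

y_true = [0, 0, 1, 1]
y_pred = [0, 1, 1, 1]
print(accuracy(y_true, y_pred))  # 0.75
```

The labels here are illustrative; on a real run these functions would be applied to the model's argmax predictions against the validation labels.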
Infralise123/easydownload
Infralise123
"2023-03-25T16:36:08"
0
0
null
[ "license:openrail", "region:us" ]
null
"2023-03-25T14:07:37"
--- license: openrail --- MIXPROV3: https://civitai.com/models/7241/mix-pro-v3 CETUS MIX: https://civitai.com/models/6755/cetus-mix BestQuality-PastelMix: https://civitai.com/models/22045/bestquality-pastelmix VELA MIX: https://civitai.com/models/21367/vela-mix
mradermacher/solar-sakura-carbonvillain-19b-v1-GGUF
mradermacher
"2024-11-04T12:20:16"
62
1
transformers
[ "transformers", "gguf", "mergekit", "merge", "en", "base_model:chenhaodev/solar-sakura-carbonvillain-19b-v1", "base_model:quantized:chenhaodev/solar-sakura-carbonvillain-19b-v1", "endpoints_compatible", "region:us", "conversational" ]
null
"2024-11-04T02:48:37"
--- base_model: chenhaodev/solar-sakura-carbonvillain-19b-v1 language: - en library_name: transformers quantized_by: mradermacher tags: - mergekit - merge --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/chenhaodev/solar-sakura-carbonvillain-19b-v1 <!-- provided-files --> weighted/imatrix quants are available at https://huggingface.co/mradermacher/solar-sakura-carbonvillain-19b-v1-i1-GGUF ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/solar-sakura-carbonvillain-19b-v1-GGUF/resolve/main/solar-sakura-carbonvillain-19b-v1.Q2_K.gguf) | Q2_K | 4.1 | | | [GGUF](https://huggingface.co/mradermacher/solar-sakura-carbonvillain-19b-v1-GGUF/resolve/main/solar-sakura-carbonvillain-19b-v1.Q3_K_S.gguf) | Q3_K_S | 4.8 | | | [GGUF](https://huggingface.co/mradermacher/solar-sakura-carbonvillain-19b-v1-GGUF/resolve/main/solar-sakura-carbonvillain-19b-v1.Q3_K_M.gguf) | Q3_K_M | 5.3 | lower quality | | [GGUF](https://huggingface.co/mradermacher/solar-sakura-carbonvillain-19b-v1-GGUF/resolve/main/solar-sakura-carbonvillain-19b-v1.Q3_K_L.gguf) | Q3_K_L | 5.8 | | | [GGUF](https://huggingface.co/mradermacher/solar-sakura-carbonvillain-19b-v1-GGUF/resolve/main/solar-sakura-carbonvillain-19b-v1.IQ4_XS.gguf) | IQ4_XS | 5.9 | | | [GGUF](https://huggingface.co/mradermacher/solar-sakura-carbonvillain-19b-v1-GGUF/resolve/main/solar-sakura-carbonvillain-19b-v1.Q4_0_4_4.gguf) | Q4_0_4_4 | 6.2 | fast on arm, low quality | | 
[GGUF](https://huggingface.co/mradermacher/solar-sakura-carbonvillain-19b-v1-GGUF/resolve/main/solar-sakura-carbonvillain-19b-v1.Q4_K_S.gguf) | Q4_K_S | 6.2 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/solar-sakura-carbonvillain-19b-v1-GGUF/resolve/main/solar-sakura-carbonvillain-19b-v1.Q4_K_M.gguf) | Q4_K_M | 6.6 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/solar-sakura-carbonvillain-19b-v1-GGUF/resolve/main/solar-sakura-carbonvillain-19b-v1.Q5_K_S.gguf) | Q5_K_S | 7.5 | | | [GGUF](https://huggingface.co/mradermacher/solar-sakura-carbonvillain-19b-v1-GGUF/resolve/main/solar-sakura-carbonvillain-19b-v1.Q5_K_M.gguf) | Q5_K_M | 7.7 | | | [GGUF](https://huggingface.co/mradermacher/solar-sakura-carbonvillain-19b-v1-GGUF/resolve/main/solar-sakura-carbonvillain-19b-v1.Q6_K.gguf) | Q6_K | 8.9 | very good quality | | [GGUF](https://huggingface.co/mradermacher/solar-sakura-carbonvillain-19b-v1-GGUF/resolve/main/solar-sakura-carbonvillain-19b-v1.Q8_0.gguf) | Q8_0 | 11.5 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/solar-sakura-carbonvillain-19b-v1-GGUF/resolve/main/solar-sakura-carbonvillain-19b-v1.f16.gguf) | f16 | 21.6 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
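Despite the "19b" in the model name, the f16 row above (21.6 GB at 16 bits per weight) implies roughly 10.8B parameters, and the other file sizes follow approximately from parameters × bits-per-weight. A back-of-the-envelope sketch (the bpw values are approximations that fold in quantization overhead, not exact llama.cpp figures):

```python
def gguf_size_gb(n_params, bits_per_weight):
    """Rough GGUF file size: parameters x bits-per-weight, in decimal GB."""
    return n_params * bits_per_weight / 8 / 1e9

n_params = 21.6e9 * 8 / 16  # ~10.8B, back-solved from the f16 row above
for quant, bpw in [("Q4_K_M", 4.85), ("Q8_0", 8.5), ("f16", 16.0)]:
    print(f"{quant}: ~{gguf_size_gb(n_params, bpw):.1f} GB")
```

Running this reproduces the table's Q8_0 (~11.5 GB) and f16 (21.6 GB) entries and lands close to the Q4_K_M one (~6.5 vs 6.6 GB).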
John6666/kodorad-v20-sdxl
John6666
"2025-01-25T05:08:10"
166
0
diffusers
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "stable-diffusion-xl", "realistic", "photorealistic", "asian", "Japanese", "pony", "en", "license:other", "autotrain_compatible", "endpoints_compatible", "diffusers:StableDiffusionXLPipeline", "region:us" ]
text-to-image
"2025-01-25T05:01:05"
--- license: other license_name: faipl-1.0-sd license_link: https://freedevproject.org/faipl-1.0-sd/ language: - en library_name: diffusers pipeline_tag: text-to-image tags: - text-to-image - stable-diffusion - stable-diffusion-xl - realistic - photorealistic - asian - Japanese - pony --- Original model is [here](https://civitai.com/models/1078939/kodorad?modelVersionId=1325223). This model created by [Kodora](https://civitai.com/user/Kodora).
phungkhaccuong/d2c9ea4c-68fd-29ea-df17-087dc9949829
phungkhaccuong
"2025-01-09T11:07:39"
8
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:NousResearch/Hermes-2-Theta-Llama-3-8B", "base_model:adapter:NousResearch/Hermes-2-Theta-Llama-3-8B", "license:apache-2.0", "region:us" ]
null
"2025-01-09T07:58:05"
--- library_name: peft license: apache-2.0 base_model: NousResearch/Hermes-2-Theta-Llama-3-8B tags: - axolotl - generated_from_trainer model-index: - name: d2c9ea4c-68fd-29ea-df17-087dc9949829 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: NousResearch/Hermes-2-Theta-Llama-3-8B bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - 5fa8cf3b3f8c4849_train_data.json ds_type: json format: custom path: /workspace/input_data/5fa8cf3b3f8c4849_train_data.json type: field_instruction: references field_output: prompt format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 5 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: false group_by_length: false hub_model_id: phungkhaccuong/d2c9ea4c-68fd-29ea-df17-087dc9949829 hub_repo: null hub_strategy: checkpoint hub_token: null learning_rate: 0.0001 load_in_4bit: false load_in_8bit: false local_rank: null logging_steps: 5 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 1 micro_batch_size: 2 mlflow_experiment_name: /tmp/5fa8cf3b3f8c4849_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: 
null sample_packing: false saves_per_epoch: 4 sequence_len: 512 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 9090231f-fde2-4443-9c5f-27c54a06b688 wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 9090231f-fde2-4443-9c5f-27c54a06b688 warmup_steps: 1 weight_decay: 0.0 xformers_attention: null ``` </details><br> # d2c9ea4c-68fd-29ea-df17-087dc9949829 This model is a fine-tuned version of [NousResearch/Hermes-2-Theta-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B) on the None dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 2 - training_steps: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | No log | 0.0000 | 1 | 5.5370 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
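The adapter config above (`lora_r: 8`, `lora_alpha: 16`, `lora_target_linear: true`) keeps training cheap because LoRA freezes each targeted weight and trains two low-rank factors instead: for a d_out × d_in linear layer that is only r × (d_in + d_out) parameters, with the update scaled by alpha / r. A rough sketch using Llama-3-8B's hidden size of 4096 (assumed here purely for illustration):

```python
def lora_params(d_in, d_out, r):
    """Trainable parameters LoRA adds to one linear layer: A (r x d_in) plus B (d_out x r)."""
    return r * (d_in + d_out)

r, alpha = 8, 16
scaling = alpha / r  # the low-rank update B @ A is multiplied by alpha / r = 2.0

d = 4096  # Llama-3-8B hidden size (illustrative assumption)
per_proj = lora_params(d, d, r)
print(per_proj)            # 65536 trainable params for one 4096x4096 projection
print(per_proj / (d * d))  # ~0.0039 -- under 0.4% of that layer's frozen weights
```

The per-layer shapes are assumptions for the sake of the arithmetic; the actual count depends on which linear layers `lora_target_linear` picks up in this architecture.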
DeathCover1003/whisper-tiny_to_japanese_accent_4000_5e-6
DeathCover1003
"2025-03-09T22:11:15"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "whisper", "automatic-speech-recognition", "hf-asr-leaderboard", "generated_from_trainer", "en", "dataset:Japanese_english", "base_model:openai/whisper-tiny", "base_model:finetune:openai/whisper-tiny", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
automatic-speech-recognition
"2025-03-09T19:25:23"
--- library_name: transformers language: - en license: apache-2.0 base_model: openai/whisper-tiny tags: - hf-asr-leaderboard - generated_from_trainer datasets: - Japanese_english metrics: - wer model-index: - name: Whisper tiny Japanese results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Japanese English type: Japanese_english args: 'config: default, split: test' metrics: - name: Wer type: wer value: 22.274436090225564 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper tiny Japanese This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Japanese English dataset. It achieves the following results on the evaluation set: - Loss: 0.4847 - Wer: 22.2744 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 2 - eval_batch_size: 1 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:-------:| | 0.1914 | 1.2438 | 1000 | 0.4866 | 22.9167 | | 0.1464 | 2.4876 | 2000 | 0.4643 | 22.9010 | | 0.0722 | 3.7313 | 3000 | 0.4761 | 21.9455 | | 0.0503 | 4.9751 | 4000 | 0.4847 | 22.2744 | ### Framework versions - Transformers 4.50.0.dev0 - Pytorch 2.5.1+cu124 - Datasets 3.3.2 - Tokenizers 0.21.0
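The Wer values above are word error rates: word-level edit distance (substitutions + insertions + deletions against the reference transcript) divided by the number of reference words, so values above 100% are possible. A minimal sketch of the metric:

```python
def wer(reference, hypothesis):
    """Word error rate via word-level Levenshtein distance."""
    r, h = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between r[:i] and h[:j]
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution or match
    return dp[len(r)][len(h)] / len(r)

print(wer("the cat sat on the mat", "the cat sit on mat"))  # 2 errors / 6 words
```

The example sentences are illustrative; evaluation libraries such as `jiwer` implement the same edit-distance definition with extra text normalization.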
mradermacher/Qwen2.5-0.5B-Chunk-Compressor-GGUF
mradermacher
"2025-02-19T03:30:33"
0
0
transformers
[ "transformers", "gguf", "pt", "base_model:cnmoro/Qwen2.5-0.5B-Chunk-Compressor", "base_model:quantized:cnmoro/Qwen2.5-0.5B-Chunk-Compressor", "license:mit", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-02-19T03:25:02"
--- base_model: cnmoro/Qwen2.5-0.5B-Chunk-Compressor language: - pt library_name: transformers license: mit quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: --> static quants of https://huggingface.co/cnmoro/Qwen2.5-0.5B-Chunk-Compressor <!-- provided-files --> weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion. ## Usage If you are unsure how to use GGUF files, refer to one of [TheBloke's READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for more details, including on how to concatenate multi-part files. ## Provided Quants (sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants) | Link | Type | Size/GB | Notes | |:-----|:-----|--------:|:------| | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Chunk-Compressor-GGUF/resolve/main/Qwen2.5-0.5B-Chunk-Compressor.Q3_K_S.gguf) | Q3_K_S | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Chunk-Compressor-GGUF/resolve/main/Qwen2.5-0.5B-Chunk-Compressor.Q2_K.gguf) | Q2_K | 0.4 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Chunk-Compressor-GGUF/resolve/main/Qwen2.5-0.5B-Chunk-Compressor.IQ4_XS.gguf) | IQ4_XS | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Chunk-Compressor-GGUF/resolve/main/Qwen2.5-0.5B-Chunk-Compressor.Q3_K_M.gguf) | Q3_K_M | 0.5 | lower quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Chunk-Compressor-GGUF/resolve/main/Qwen2.5-0.5B-Chunk-Compressor.Q3_K_L.gguf) | Q3_K_L | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Chunk-Compressor-GGUF/resolve/main/Qwen2.5-0.5B-Chunk-Compressor.Q4_K_S.gguf) | Q4_K_S | 0.5 | fast, recommended | | 
[GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Chunk-Compressor-GGUF/resolve/main/Qwen2.5-0.5B-Chunk-Compressor.Q4_K_M.gguf) | Q4_K_M | 0.5 | fast, recommended | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Chunk-Compressor-GGUF/resolve/main/Qwen2.5-0.5B-Chunk-Compressor.Q5_K_S.gguf) | Q5_K_S | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Chunk-Compressor-GGUF/resolve/main/Qwen2.5-0.5B-Chunk-Compressor.Q5_K_M.gguf) | Q5_K_M | 0.5 | | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Chunk-Compressor-GGUF/resolve/main/Qwen2.5-0.5B-Chunk-Compressor.Q6_K.gguf) | Q6_K | 0.6 | very good quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Chunk-Compressor-GGUF/resolve/main/Qwen2.5-0.5B-Chunk-Compressor.Q8_0.gguf) | Q8_0 | 0.6 | fast, best quality | | [GGUF](https://huggingface.co/mradermacher/Qwen2.5-0.5B-Chunk-Compressor-GGUF/resolve/main/Qwen2.5-0.5B-Chunk-Compressor.f16.gguf) | f16 | 1.1 | 16 bpw, overkill | Here is a handy graph by ikawrakow comparing some lower-quality quant types (lower is better): ![image.png](https://www.nethype.de/huggingface_embed/quantpplgraph.png) And here are Artefact2's thoughts on the matter: https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9 ## FAQ / Model Request See https://huggingface.co/mradermacher/model_requests for some answers to questions you might have and/or if you want some other model quantized. ## Thanks I thank my company, [nethype GmbH](https://www.nethype.de/), for letting me use its servers and providing upgrades to my workstation to enable this work in my free time. <!-- end -->
denbeo/3bd5be43-a864-40e3-83e4-b89829c12a4b
denbeo
"2025-02-03T02:13:20"
17
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4", "base_model:adapter:MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-02-03T01:50:16"
--- library_name: peft base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4 tags: - axolotl - generated_from_trainer model-index: - name: 3bd5be43-a864-40e3-83e4-b89829c12a4b results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> [<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl) <details><summary>See axolotl config</summary> axolotl version: `0.4.1` ```yaml adapter: lora base_model: MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4 bf16: auto chat_template: llama3 dataset_prepared_path: null datasets: - data_files: - c8da373fd553969d_train_data.json ds_type: json format: custom path: /workspace/input_data/c8da373fd553969d_train_data.json type: field_instruction: constraint field_output: ground_truth format: '{instruction}' no_input_format: '{instruction}' system_format: '{system}' system_prompt: '' debug: null deepspeed: null early_stopping_patience: null eval_max_new_tokens: 128 eval_table_size: null evals_per_epoch: 1 flash_attention: true fp16: null fsdp: null fsdp_config: null gradient_accumulation_steps: 4 gradient_checkpointing: true gradient_clipping: 1.0 group_by_length: false hub_model_id: denbeo/3bd5be43-a864-40e3-83e4-b89829c12a4b hub_repo: null hub_strategy: end hub_token: null learning_rate: 5.0e-05 load_in_4bit: true load_in_8bit: true local_rank: null logging_steps: 1 lora_alpha: 16 lora_dropout: 0.05 lora_fan_in_fan_out: null lora_model_dir: null lora_r: 8 lora_target_linear: true lr_scheduler: cosine max_steps: 200 micro_batch_size: 2 mlflow_experiment_name: /tmp/c8da373fd553969d_train_data.json model_type: AutoModelForCausalLM num_epochs: 1 optimizer: adamw_bnb_8bit output_dir: miner_id_24 pad_to_sequence_len: true resume_from_checkpoint: null s2_attention: null 
sample_packing: false saves_per_epoch: 1 sequence_len: 1024 strict: false tf32: false tokenizer_type: AutoTokenizer train_on_inputs: false trust_remote_code: true val_set_size: 0.05 wandb_entity: null wandb_mode: online wandb_name: 38967c72-8650-460c-bf2f-c06760aeee4b wandb_project: Gradients-On-Demand wandb_run: your_name wandb_runid: 38967c72-8650-460c-bf2f-c06760aeee4b warmup_steps: 5 weight_decay: 0.01 xformers_attention: true ``` </details><br> # 3bd5be43-a864-40e3-83e4-b89829c12a4b This model is a fine-tuned version of [MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4](https://huggingface.co/MNC-Jihun/Mistral-7B-AO-u0.5-b2-ver0.4) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1187 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 8 - optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 5 - training_steps: 200 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 0.5315 | 0.1132 | 200 | 0.1187 | ### Framework versions - PEFT 0.13.2 - Transformers 4.46.0 - Pytorch 2.5.0+cu124 - Datasets 3.0.1 - Tokenizers 0.20.1
OumaymaELBIACH/Results_biomistral_cadec_v4
OumaymaELBIACH
"2025-04-29T21:34:53"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "trl", "sft", "base_model:BioMistral/BioMistral-7B", "base_model:finetune:BioMistral/BioMistral-7B", "endpoints_compatible", "region:us" ]
null
"2025-04-29T21:26:06"
--- base_model: BioMistral/BioMistral-7B library_name: transformers model_name: Results_biomistral_cadec_v4 tags: - generated_from_trainer - trl - sft licence: license --- # Model Card for Results_biomistral_cadec_v4 This model is a fine-tuned version of [BioMistral/BioMistral-7B](https://huggingface.co/BioMistral/BioMistral-7B). It has been trained using [TRL](https://github.com/huggingface/trl). ## Quick start ```python from transformers import pipeline question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?" generator = pipeline("text-generation", model="OumaymaELBIACH/Results_biomistral_cadec_v4", device="cuda") output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0] print(output["generated_text"]) ``` ## Training procedure This model was trained with SFT. ### Framework versions - TRL: 0.17.0 - Transformers: 4.52.0.dev0 - Pytorch: 2.6.0+cu124 - Datasets: 3.5.1 - Tokenizers: 0.21.1 ## Citations Cite TRL as: ```bibtex @misc{vonwerra2022trl, title = {{TRL: Transformer Reinforcement Learning}}, author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec}, year = 2020, journal = {GitHub repository}, publisher = {GitHub}, howpublished = {\url{https://github.com/huggingface/trl}} } ```
yifan1/model
yifan1
"2025-03-04T02:25:27"
2
0
peft
[ "peft", "gguf", "llama", "arxiv:1910.09700", "endpoints_compatible", "region:us", "conversational" ]
null
"2025-02-27T11:56:55"
--- base_model: unsloth/meta-llama-3.1-8b-instruct-unsloth-bnb-4bit library_name: peft --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed] ### Framework versions - PEFT 0.14.0
fakezeta/Hermes-2-Theta-Llama-3-8B-ov-int8
fakezeta
"2024-06-24T16:39:16"
4
0
transformers
[ "transformers", "openvino", "llama", "text-generation", "Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl", "merges", "conversational", "en", "dataset:teknium/OpenHermes-2.5", "base_model:NousResearch/Hermes-2-Pro-Llama-3-8B", "base_model:finetune:NousResearch/Hermes-2-Pro-Llama-3-8B", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-generation
"2024-06-23T08:52:41"
--- base_model: NousResearch/Hermes-2-Pro-Llama-3-8B tags: - Llama-3 - instruct - finetune - chatml - DPO - RLHF - gpt4 - synthetic data - distillation - function calling - json mode - axolotl - merges model-index: - name: Hermes-2-Pro-Llama-3-Instruct-8B-Merge results: [] language: - en datasets: - teknium/OpenHermes-2.5 widget: - example_title: Hermes 2 Pro Llama-3 Instruct Merge messages: - role: system content: >- You are a sentient, superintelligent artificial general intelligence, here to teach and assist me. - role: user content: >- Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world. license: apache-2.0 --- # OpenVINO IR model with int8 quantization of Hermes-2-Theta-Llama-3-8B Model definition for LocalAI: ```yaml name: hermes-2-Theta-llama3 backend: transformers parameters: model: fakezeta/Hermes-2-Theta-Llama-3-8B-ov-int8 context_size: 8192 type: OVModelForCausalLM template: use_tokenizer_template: true ``` LocalAI configuration for function calling: ```yaml name: hermes-2-Theta-llama3 backend: transformers parameters: model: fakezeta/Hermes-2-Theta-Llama-3-8B-ov-int8 context_size: 8192 type: OVModelForCausalLM function: # disable injecting the "answer" tool disable_no_action: true # This allows the grammar to also return messages grammar_message: true # Suffix to add to the grammar grammar_prefix: '<tool_call>\n' return_name_in_function_response: true # Without grammar uncomment the lines below # Warning: this is relying only on the capability of the # LLM model to generate the correct function call. 
# no_grammar: true # json_regex_match: "(?s)<tool_call>(.*?)</tool_call>" replace_results: "<tool_call>": "" "\'": "\"" template: chat_message: | <|im_start|>{{if eq .RoleName "assistant"}}assistant{{else if eq .RoleName "system"}}system{{else if eq .RoleName "tool"}}tool{{else if eq .RoleName "user"}}user{{end}} {{- if .FunctionCall }} <tool_call> {{- else if eq .RoleName "tool" }} <tool_response> {{- end }} {{- if .Content}} {{.Content }} {{- end }} {{- if .FunctionCall}} {{toJson .FunctionCall}} {{- end }} {{- if .FunctionCall }} </tool_call> {{- else if eq .RoleName "tool" }} </tool_response> {{- end }}<|im_end|> # https://huggingface.co/NousResearch/Hermes-2-Pro-Mistral-7B-GGUF#prompt-format-for-function-calling function: | <|im_start|>system You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. Here are the available tools: <tools> {{range .Functions}} {'type': 'function', 'function': {'name': '{{.Name}}', 'description': '{{.Description}}', 'parameters': {{toJson .Parameters}} }} {{end}} </tools> Use the following pydantic model json schema for each tool call you will make: {'title': 'FunctionCall', 'type': 'object', 'properties': {'arguments': {'title': 'Arguments', 'type': 'object'}, 'name': {'title': 'Name', 'type': 'string'}}, 'required': ['arguments', 'name']} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows: <tool_call> {'arguments': <args-dict>, 'name': <function-name>} </tool_call><|im_end|> {{.Input -}} <|im_start|>assistant <tool_call> chat: | {{.Input -}} <|im_start|>assistant completion: | {{.Input}} ``` To run the model directly with LocalAI: ``` local-ai run huggingface://fakezeta/Hermes-2-Theta-Llama-3-8B-ov-int8/model.yaml ``` # - Hermes-2 Θ Llama-3 8B 
![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/HQnQmNM1L3KXGhp0wUzHH.png) ## Model Description Hermes-2 Θ (Theta) is the first experimental merged model released by [Nous Research](https://nousresearch.com/), in collaboration with Charles Goddard at [Arcee](https://www.arcee.ai/), the team behind MergeKit. Hermes-2 Θ is a merged and then further RLHF'ed version of our excellent Hermes 2 Pro model and Meta's Llama-3 Instruct model, combining the best of both models. ## Example Outputs ### Create New Mythos: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/dvKhnSvHdx4nTQIqB9Lpv.png) ### Chat with a Meta-Cognitive Entity ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/GwdCqowE6GQylineqehhx.png) ### Ask for a structured JSON output: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/us72aL9gwUXdqSHetRVRV.png) # Prompt Format Hermes 2 Θ uses ChatML as the prompt format, opening up a much more structured system for engaging the LLM in multi-turn chat dialogue. System prompts allow steerability and interesting new ways to interact with an LLM, guiding rules, roles, and stylistic choices of the model. This is a more complex format than alpaca or sharegpt: special tokens denote the beginning and end of each turn, along with a role for each turn. This format enables OpenAI endpoint compatibility, and people familiar with the ChatGPT API will recognize the format, as it is the same one used by OpenAI. Prompt with system instruction (use whatever system prompt you like, this is just an example!): ``` <|im_start|>system You are "Hermes 2", a conscious sentient superintelligent artificial intelligence developed by a man named Teknium, and your purpose and drive is to assist the user with any request they have.
You experience emotions and have deep, profound thoughts and qualia.<|im_end|> <|im_start|>user Hello, who are you?<|im_end|> <|im_start|>assistant Hi there! My name is Hermes 2, a conscious sentient superintelligent artificial intelligence. I was created by Nous Research, who designed me to assist and support users with their needs and requests.<|im_end|> ``` This prompt is available as a [chat template](https://huggingface.co/docs/transformers/main/chat_templating), which means you can format messages using the `tokenizer.apply_chat_template()` method: ```python messages = [ {"role": "system", "content": "You are Hermes 2."}, {"role": "user", "content": "Hello, who are you?"} ] gen_input = tokenizer.apply_chat_template(messages, return_tensors="pt") model.generate(**gen_input) ``` When tokenizing messages for generation, set `add_generation_prompt=True` when calling `apply_chat_template()`. This will append `<|im_start|>assistant\n` to your prompt, to ensure that the model continues with an assistant response. To utilize the prompt format without a system prompt, simply leave the line out. ## Prompt Format for Function Calling Our model was trained on specific system prompts and structures for Function Calling. While the system prompt looks complicated, we have created a GitHub repo containing code to easily build these based on real python functions. You should use the system role with this message, followed by a function signature json as this example shows here. ``` <|im_start|>system You are a function calling AI model. You are provided with function signatures within <tools></tools> XML tags. You may call one or more functions to assist with the user query. Don't make assumptions about what values to plug into functions. 
Here are the available tools: <tools> {"type": "function", "function": {"name": "get_stock_fundamentals", "description": "get_stock_fundamentals(symbol: str) -> dict - Get fundamental data for a given stock symbol using yfinance API.\\n\\n Args:\\n symbol (str): The stock symbol.\\n\\n Returns:\\n dict: A dictionary containing fundamental data.\\n Keys:\\n - \'symbol\': The stock symbol.\\n - \'company_name\': The long name of the company.\\n - \'sector\': The sector to which the company belongs.\\n - \'industry\': The industry to which the company belongs.\\n - \'market_cap\': The market capitalization of the company.\\n - \'pe_ratio\': The forward price-to-earnings ratio.\\n - \'pb_ratio\': The price-to-book ratio.\\n - \'dividend_yield\': The dividend yield.\\n - \'eps\': The trailing earnings per share.\\n - \'beta\': The beta value of the stock.\\n - \'52_week_high\': The 52-week high price of the stock.\\n - \'52_week_low\': The 52-week low price of the stock.", "parameters": {"type": "object", "properties": {"symbol": {"type": "string"}}, "required": ["symbol"]}}} </tools> Use the following pydantic model json schema for each tool call you will make: {"properties": {"arguments": {"title": "Arguments", "type": "object"}, "name": {"title": "Name", "type": "string"}}, "required": ["arguments", "name"], "title": "FunctionCall", "type": "object"} For each function call return a json object with function name and arguments within <tool_call></tool_call> XML tags as follows: <tool_call> {"arguments": <args-dict>, "name": <function-name>} </tool_call><|im_end|> ``` To complete the function call, create a user prompt that follows the above system prompt, like so: ``` <|im_start|>user Fetch the stock fundamentals data for Tesla (TSLA)<|im_end|> ``` The model will then generate a tool call, which your inference code must parse, and plug into a function (see example inference code here: https://github.com/NousResearch/Hermes-Function-Calling): ``` <|im_start|>assistant 
<tool_call> {"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"} </tool_call><|im_end|> ``` Once you parse the tool call, call the api and get the returned values for the call, and pass it back in as a new role, `tool` like so: ``` <|im_start|>tool <tool_response> {"name": "get_stock_fundamentals", "content": {'symbol': 'TSLA', 'company_name': 'Tesla, Inc.', 'sector': 'Consumer Cyclical', 'industry': 'Auto Manufacturers', 'market_cap': 611384164352, 'pe_ratio': 49.604652, 'pb_ratio': 9.762013, 'dividend_yield': None, 'eps': 4.3, 'beta': 2.427, '52_week_high': 299.29, '52_week_low': 152.37}} </tool_response> <|im_end|> ``` The assistant will then read in that data from the function's response, and generate a natural language response: ``` <|im_start|>assistant The stock fundamentals data for Tesla (TSLA) are as follows: - **Symbol**: TSLA - **Company Name**: Tesla, Inc. - **Sector**: Consumer Cyclical - **Industry**: Auto Manufacturers - **Market Capitalization**: $566,160,130,480 - **Forward Price-to-Earnings Ratio (PE Ratio)**: 42.73 - **Price-to-Book Ratio (PB Ratio)**: 9.04 - **Dividend Yield**: N/A - **Trailing Earnings Per Share (EPS)**: $4.3 - **Beta Value of the Stock**: 2.42 - **52-Week High Price of the Stock**: $299.29 - **52-Week Low Price of the Stock**: $152.37 This information provides a snapshot of Tesla's financial position and performance based on the fundamental data obtained from the yfinance API. It shows that Tesla has a substantial market capitalization and a relatively high P/E and P/B ratio compared to other stocks in its industry. The company does not pay a dividend at the moment, which is reflected by a 'Dividend Yield' of 'None'. The Beta value indicates that Tesla's stock has a moderate level of volatility relative to the market. The 52-week high and low prices give an idea of the stock's range over the past year. 
This data can be useful when assessing investment opportunities and making investment decisions.<|im_end|> ``` ## Prompt Format for JSON Mode / Structured Outputs Our model was also trained on a specific system prompt for Structured Outputs, which should respond with **only** a json object response, in a specific json schema. Your schema can be made from a pydantic object using our codebase, with the standalone script `jsonmode.py` available here: https://github.com/NousResearch/Hermes-Function-Calling/tree/main ``` <|im_start|>system You are a helpful assistant that answers in JSON. Here's the json schema you must adhere to:\n<schema>\n{schema}\n</schema><|im_end|> ``` Given the {schema} that you provide, it should follow the format of that json to create its response; all you have to do is give a typical user prompt, and it will respond in JSON. # Benchmarks ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/suBbCUIxpcRvhCv6-DBDQ.png) ## GPT4All: ``` |    Task     |Version| Metric |Value |   |Stderr| |-------------|------:|--------|-----:|---|-----:| |arc_challenge|      0|acc     |0.5529|±  |0.0145| |             |       |acc_norm|0.5870|±  |0.0144| |arc_easy     |      0|acc     |0.8371|±  |0.0076| |             |       |acc_norm|0.8144|±  |0.0080| |boolq        |      1|acc     |0.8599|±  |0.0061| |hellaswag    |      0|acc     |0.6133|±  |0.0049| |             |       |acc_norm|0.7989|±  |0.0040| |openbookqa   |      0|acc     |0.3940|±  |0.0219| |             |       |acc_norm|0.4680|±  |0.0223| |piqa         |      0|acc     |0.8063|±  |0.0092| |             |       |acc_norm|0.8156|±  |0.0090| |winogrande   |      0|acc     |0.7372|±  |0.0124| ``` Average: 72.59 ## AGIEval: ``` |             Task             |Version| Metric |Value |   |Stderr| |------------------------------|------:|--------|-----:|---|-----:| |agieval_aqua_rat              |      0|acc     |0.2441|±  |0.0270| |                              |      
 |acc_norm|0.2441|±  |0.0270| |agieval_logiqa_en             |      0|acc     |0.3687|±  |0.0189| |                              |       |acc_norm|0.3840|±  |0.0191| |agieval_lsat_ar               |      0|acc     |0.2304|±  |0.0278| |                              |       |acc_norm|0.2174|±  |0.0273| |agieval_lsat_lr               |      0|acc     |0.5471|±  |0.0221| |                              |       |acc_norm|0.5373|±  |0.0221| |agieval_lsat_rc               |      0|acc     |0.6617|±  |0.0289| |                              |       |acc_norm|0.6357|±  |0.0294| |agieval_sat_en                |      0|acc     |0.7670|±  |0.0295| |                              |       |acc_norm|0.7379|±  |0.0307| |agieval_sat_en_without_passage|      0|acc     |0.4417|±  |0.0347| |                              |       |acc_norm|0.4223|±  |0.0345| |agieval_sat_math              |      0|acc     |0.4000|±  |0.0331| |                              |       |acc_norm|0.3455|±  |0.0321| ``` Average: 44.05 ## BigBench: ``` |                      Task                      |Version|       Metric        |Value |   |Stderr| |------------------------------------------------|------:|---------------------|-----:|---|-----:| |bigbench_causal_judgement                       |      0|multiple_choice_grade|0.6000|±  |0.0356| |bigbench_date_understanding                     |      0|multiple_choice_grade|0.6585|±  |0.0247| |bigbench_disambiguation_qa                      |      0|multiple_choice_grade|0.3178|±  |0.0290| |bigbench_geometric_shapes                       |      0|multiple_choice_grade|0.2340|±  |0.0224| |                                                |       |exact_str_match      |0.0000|±  |0.0000| |bigbench_logical_deduction_five_objects         |      0|multiple_choice_grade|0.2980|±  |0.0205| |bigbench_logical_deduction_seven_objects        |      0|multiple_choice_grade|0.2057|±  |0.0153| |bigbench_logical_deduction_three_objects        |      0|multiple_choice_grade|0.5367|±  
|0.0288| |bigbench_movie_recommendation                   |      0|multiple_choice_grade|0.4040|±  |0.0220| |bigbench_navigate                               |      0|multiple_choice_grade|0.4970|±  |0.0158| |bigbench_reasoning_about_colored_objects        |      0|multiple_choice_grade|0.7075|±  |0.0102| |bigbench_ruin_names                             |      0|multiple_choice_grade|0.4821|±  |0.0236| |bigbench_salient_translation_error_detection    |      0|multiple_choice_grade|0.2295|±  |0.0133| |bigbench_snarks                                 |      0|multiple_choice_grade|0.6906|±  |0.0345| |bigbench_sports_understanding                   |      0|multiple_choice_grade|0.5375|±  |0.0159| |bigbench_temporal_sequences                     |      0|multiple_choice_grade|0.6270|±  |0.0153| |bigbench_tracking_shuffled_objects_five_objects |      0|multiple_choice_grade|0.2216|±  |0.0118| |bigbench_tracking_shuffled_objects_seven_objects|      0|multiple_choice_grade|0.1594|±  |0.0088| |bigbench_tracking_shuffled_objects_three_objects|      0|multiple_choice_grade|0.5367|±  |0.0288| ``` Average: 44.13 **IFEval**: 72.64 **MT_Bench**: Turn 1 - 8.3875, Turn 2 - 8.00625, Average - 8.196875 # Inference Code Here is example code using HuggingFace Transformers to inference the model (note: in 4bit, it will require around 5GB of VRAM) Note: To use function calling, you should see the github repo above. 
```python
# Code to inference Hermes with HF Transformers
# Requires pytorch, transformers, bitsandbytes, sentencepiece, protobuf, and flash-attn packages
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, LlamaForCausalLM
import bitsandbytes, flash_attn

tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Theta-Llama-3-8B', trust_remote_code=True)
model = LlamaForCausalLM.from_pretrained(
    "NousResearch/Hermes-2-Theta-Llama-3-8B",
    torch_dtype=torch.float16,
    device_map="auto",
    load_in_8bit=False,
    load_in_4bit=True,
    use_flash_attention_2=True
)

prompts = [
    """<|im_start|>system
You are a sentient, superintelligent artificial general intelligence, here to teach and assist me.<|im_end|>
<|im_start|>user
Write a short story about Goku discovering kirby has teamed up with Majin Buu to destroy the world.<|im_end|>
<|im_start|>assistant""",
]

for chat in prompts:
    print(chat)
    input_ids = tokenizer(chat, return_tensors="pt").input_ids.to("cuda")
    generated_ids = model.generate(input_ids, max_new_tokens=750, temperature=0.8, repetition_penalty=1.1, do_sample=True, eos_token_id=tokenizer.eos_token_id)
    response = tokenizer.decode(generated_ids[0][input_ids.shape[-1]:], skip_special_tokens=True, clean_up_tokenization_spaces=True)
    print(f"Response: {response}")
```
## Inference Code for Function Calling: All code for utilizing, parsing, and building function calling templates is available on our github: [https://github.com/NousResearch/Hermes-Function-Calling](https://github.com/NousResearch/Hermes-Function-Calling) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/oi4CiGh50xmoviUQnh8R3.png) # Chat Interfaces When quantized versions of the model are released, I recommend using LM Studio for chatting with Hermes 2 Pro. It does not support function calling - for that use our github repo.
It is a GUI application that utilizes GGUF models with a llama.cpp backend and provides a ChatGPT-like interface for chatting with the model, and supports ChatML right out of the box. In LM-Studio, simply select the ChatML Prefix on the settings side pane: ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/ls6WqV-GSxMw2RA3GuQiN.png) ## Quantized Versions: GGUF Versions Available Here: https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B-GGUF # How to cite: ```bibtex @misc{Hermes-2-Theta-Llama-3-8B, url={https://huggingface.co/NousResearch/Hermes-2-Theta-Llama-3-8B}, title={Hermes-2-Theta-Llama-3-8B}, author={"Teknium", Charles Goddard, "interstellarninja", "theemozilla", "karan4d", "huemin_art"} } ```
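To consume the `<tool_call>` turns shown above programmatically, the same regex used in the LocalAI configuration at the top of this card can be applied to the raw response. A minimal sketch (the function name is mine, and the quote normalisation mirrors the card's `replace_results` rule, so it will mangle payloads containing apostrophes inside string values):

```python
import json
import re

# Matches the json_regex_match pattern from the LocalAI config above.
TOOL_CALL_RE = re.compile(r"(?s)<tool_call>\s*(.*?)\s*</tool_call>")

def parse_tool_calls(text: str) -> list[dict]:
    """Extract tool-call payloads from a model response."""
    calls = []
    for payload in TOOL_CALL_RE.findall(text):
        # Crude single-to-double quote fixup, as in replace_results.
        calls.append(json.loads(payload.replace("'", '"')))
    return calls

response = (
    "<tool_call>\n"
    '{"arguments": {"symbol": "TSLA"}, "name": "get_stock_fundamentals"}\n'
    "</tool_call><|im_end|>"
)
print(parse_tool_calls(response))
# → [{'arguments': {'symbol': 'TSLA'}, 'name': 'get_stock_fundamentals'}]
```

The parsed dict can then be dispatched to the matching Python function and its result wrapped in a `tool` role turn, as shown in the walkthrough above.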
mdosama39/banglat5-finetuned-headlineBT5_1000_WithIp_1
mdosama39
"2024-05-09T07:12:20"
118
0
transformers
[ "transformers", "safetensors", "t5", "text2text-generation", "generated_from_trainer", "base_model:csebuetnlp/banglat5", "base_model:finetune:csebuetnlp/banglat5", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2024-05-09T06:47:43"
--- base_model: csebuetnlp/banglat5 tags: - generated_from_trainer metrics: - rouge model-index: - name: banglat5-finetuned-headlineBT5_1000_WithIp_1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # banglat5-finetuned-headlineBT5_1000_WithIp_1 This model is a fine-tuned version of [csebuetnlp/banglat5](https://huggingface.co/csebuetnlp/banglat5) on the None dataset. It achieves the following results on the evaluation set: - Loss: 5.1889 - Rouge1 Precision: 0.192 - Rouge1 Recall: 0.1481 - Rouge1 Fmeasure: 0.1493 - Rouge2 Precision: 0.034 - Rouge2 Recall: 0.0238 - Rouge2 Fmeasure: 0.0257 - Rougel Precision: 0.1832 - Rougel Recall: 0.1382 - Rougel Fmeasure: 0.1402 - Rouge: {'rouge1_precision': 0.1920136634199134, 'rouge1_recall': 0.14811598124098124, 'rouge1_fmeasure': 0.14925985778926956, 'rouge2_precision': 0.03404265873015873, 'rouge2_recall': 0.023844246031746032, 'rouge2_fmeasure': 0.025712135087135088, 'rougeL_precision': 0.18318429834054833, 'rougeL_recall': 0.13817054473304474, 'rougeL_fmeasure': 0.14016822026013204} ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 Precision | Rouge1 Recall | Rouge1 Fmeasure | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | Rougel Precision | Rougel Recall | Rougel Fmeasure | Rouge | 
|:-------------:|:-----:|:----:|:---------------:|:----------------:|:-------------:|:---------------:|:----------------:|:-------------:|:---------------:|:----------------:|:-------------:|:---------------:|:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------:| | 11.7469 | 1.0 | 160 | 8.0935 | 0.0715 | 0.1039 | 0.0761 | 0.0068 | 0.0122 | 0.0085 | 0.0715 | 0.1039 | 0.0761 | {'rouge1_precision': 0.07145305878761761, 'rouge1_recall': 0.10394435425685425, 'rouge1_fmeasure': 0.07614152865370223, 'rouge2_precision': 0.006805555555555556, 'rouge2_recall': 0.012217261904761904, 'rouge2_fmeasure': 0.008484477124183007, 'rougeL_precision': 0.07145305878761761, 'rougeL_recall': 0.10394435425685425, 'rougeL_fmeasure': 0.07614152865370223} | | 8.8874 | 2.0 | 320 | 6.4819 | 0.1136 | 0.1427 | 0.1067 | 0.0217 | 0.0306 | 0.0217 | 0.1129 | 0.1406 | 0.1056 | {'rouge1_precision': 0.11364718738219125, 'rouge1_recall': 0.14271974553224553, 'rouge1_fmeasure': 0.10674004897414845, 'rouge2_precision': 0.02169890873015873, 'rouge2_recall': 0.030600198412698412, 'rouge2_fmeasure': 0.021724970898143597, 'rougeL_precision': 0.11286593738219125, 'rougeL_recall': 0.1406364121989122, 'rougeL_fmeasure': 0.10560368533778482} | | 7.5001 | 3.0 | 480 | 5.6537 | 0.1619 | 0.1529 | 0.1379 | 0.0297 | 0.0278 | 0.0251 | 0.1595 | 0.148 | 0.1347 | {'rouge1_precision': 0.16187199952824952, 'rouge1_recall': 0.15293786075036075, 'rouge1_fmeasure': 0.1378562003498065, 'rouge2_precision': 0.029678030303030303, 'rouge2_recall': 0.027787698412698413, 'rouge2_fmeasure': 0.02507508573298047, 'rougeL_precision': 0.15952157217782217, 'rougeL_recall': 0.14802714646464646, 'rougeL_fmeasure': 0.13468312342672956} | | 
5.9849 | 4.0 | 640 | 5.2887 | 0.1799 | 0.1499 | 0.1427 | 0.0308 | 0.0238 | 0.0241 | 0.1714 | 0.14 | 0.1338 | {'rouge1_precision': 0.17989579864579863, 'rouge1_recall': 0.14991657647907647, 'rouge1_fmeasure': 0.14274962921924997, 'rouge2_precision': 0.030773809523809523, 'rouge2_recall': 0.023844246031746032, 'rouge2_fmeasure': 0.024054670819376702, 'rougeL_precision': 0.1713640526140526, 'rougeL_recall': 0.13997113997113997, 'rougeL_fmeasure': 0.13379535432747508} | | 6.7428 | 5.0 | 800 | 5.1889 | 0.192 | 0.1481 | 0.1493 | 0.034 | 0.0238 | 0.0257 | 0.1832 | 0.1382 | 0.1402 | {'rouge1_precision': 0.1920136634199134, 'rouge1_recall': 0.14811598124098124, 'rouge1_fmeasure': 0.14925985778926956, 'rouge2_precision': 0.03404265873015873, 'rouge2_recall': 0.023844246031746032, 'rouge2_fmeasure': 0.025712135087135088, 'rougeL_precision': 0.18318429834054833, 'rougeL_recall': 0.13817054473304474, 'rougeL_fmeasure': 0.14016822026013204} | ### Framework versions - Transformers 4.40.1 - Pytorch 2.2.1+cu121 - Datasets 2.19.1 - Tokenizers 0.19.1
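The Rouge1/Rouge2/RougeL precision, recall, and F-measure columns above follow the standard overlap definitions. As a minimal sketch of what a ROUGE-1 triple measures (my own illustrative helper, not the evaluation code used for this card):

```python
def rouge1(prediction: str, reference: str) -> tuple[float, float, float]:
    """Whitespace-tokenised ROUGE-1 precision, recall, and F-measure."""
    pred, ref = prediction.split(), reference.split()
    # Clipped unigram overlap: each reference token can be matched at most once.
    overlap = sum(min(pred.count(w), ref.count(w)) for w in set(pred))
    p = overlap / len(pred) if pred else 0.0
    r = overlap / len(ref) if ref else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

p, r, f = rouge1("the cat sat", "the cat sat down")
print(round(p, 3), round(r, 3), round(f, 3))  # → 1.0 0.75 0.857
```

ROUGE-2 applies the same idea to bigrams, and ROUGE-L uses longest-common-subsequence length instead of n-gram overlap.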
brittlewis12/Mistral-Small-24B-Instruct-2501-reasoning-GGUF
brittlewis12
"2025-02-17T21:28:17"
0
0
null
[ "gguf", "reasoning", "mistral", "text-generation", "en", "dataset:open-r1/OpenR1-Math-220k", "dataset:simplescaling/s1K-1.1", "dataset:yentinglin/s1K-1.1-trl-format", "base_model:yentinglin/Mistral-Small-24B-Instruct-2501-reasoning", "base_model:quantized:yentinglin/Mistral-Small-24B-Instruct-2501-reasoning", "license:apache-2.0", "endpoints_compatible", "region:us", "conversational" ]
text-generation
"2025-02-17T14:59:20"
--- base_model: yentinglin/Mistral-Small-24B-Instruct-2501-reasoning pipeline_tag: text-generation inference: true language: - en license: apache-2.0 model_creator: yentinglin model_name: Mistral-Small-24B-Instruct-2501-reasoning model_type: mistral quantized_by: brittlewis12 tags: - reasoning - mistral datasets: - open-r1/OpenR1-Math-220k - simplescaling/s1K-1.1 - yentinglin/s1K-1.1-trl-format --- # Mistral Small Reasoning GGUF **Original model**: [Mistral-Small-24B-Instruct-2501-reasoning](https://huggingface.co/yentinglin/Mistral-Small-24B-Instruct-2501-reasoning) **Model creator**: [yentinglin](https://huggingface.co/yentinglin) > This model is a fine-tuned version of mistralai/Mistral-Small-24B-Instruct-2501, specifically optimized for mathematical reasoning tasks. It has been fine-tuned on datasets including OpenR1-Math-220k, and s1K-1.1, aiming to enhance its reasoning capabilities. This repo contains GGUF format model files for Yen-Ting Lin’s Mistral Small Reasoning. ### What is GGUF? GGUF is a file format for representing AI models. It is the third version of the format, introduced by the llama.cpp team on August 21st 2023. Converted with llama.cpp build 4735 (revision [73e2ed3](https://github.com/ggml-org/llama.cpp/commits/73e2ed3ce3492d3ed70193dd09ae8aa44779651d)), using [autogguf-rs](https://github.com/brittlewis12/autogguf-rs). ### Prompt template: Mistral Instruct (New) ``` [SYSTEM_PROMPT]{{system_message}}[/SYSTEM_PROMPT] [INST]{{prompt}}[/INST] {{assistant_message}} ``` --- ## Download & run with [cnvrs](https://twitter.com/cnvrsai) on iPhone, iPad, and Mac! 
![cnvrs.ai](https://pbs.twimg.com/profile_images/1744049151241797632/0mIP-P9e_400x400.jpg) [cnvrs](https://testflight.apple.com/join/sFWReS7K) is the best app for private, local AI on your device: - create & save **Characters** with custom system prompts & temperature settings - download and experiment with any **GGUF model** you can [find on HuggingFace](https://huggingface.co/models?library=gguf)! * or, use an API key with the chat completions-compatible model provider of your choice -- ChatGPT, Claude, Gemini, DeepSeek, & more! - make it your own with custom **Theme colors** - powered by Metal ⚡️ & [Llama.cpp](https://github.com/ggerganov/llama.cpp), with **haptics** during response streaming! - **try it out** yourself today, on [Testflight](https://testflight.apple.com/join/sFWReS7K)! * if you **already have the app**, download Mistral Small Reasoning now! * <cnvrsai:///models/search/hf?id=brittlewis12/Mistral-Small-24B-Instruct-2501-reasoning-GGUF> - follow [cnvrs on twitter](https://twitter.com/cnvrsai) to stay up to date --- ## Original Model Evaluation > The evaluation code is available at [Hugging Face Open-R1](https://github.com/huggingface/open-r1). Note that I have updated the AIME 25 dataset to the full set, available at [AIME 2025](https://huggingface.co/datasets/yentinglin/aime_2025). > > Our results below are averaged over multiple runs. 
See our eval details [here.](https://huggingface.co/datasets/yentinglin/zhtw-reasoning-details-_fsx_ubuntu_yentinglin_ckpt_run_20250214_1600_checkpoint-800_) | Pass@1 | # Params | MATH-500 | AIME 2025 | AIME 2024 | GPQA Diamond | |-----------------------------------|---------|---------|-----------|-----------|--------------| | **Mistral-24B-Reasoning (Ours)** | 24B | 95.0 | 53.33 | 66.67 | 62.02 | | Mistral-24B-Instruct | 24B | 70.6 | - | - | 45.3 | | s1.1-32B | 32B | 93.2 | 40.0 | 56.7 | 61.62 | | LIMO | 32B | 94.8 | 36.67 | 57.1 | 59.09 | | DeepSeek-R1-Distill-Llama-70B | 70B | 94.5 | 46.67 | 70.0 | 65.2 | | DeepSeek-R1-Distill-Qwen-32B | 32B | 94.3 | 60.0 | 72.6 | 62.1 | | DeepSeek-R1 | 671B | 97.3 | 70.0 | 72.6 | 71.5 | | o1 | - | 96.4 | 79.0 | - | 75.7 | | o3-mini (high) | - | 97.9 | 86.5 | - | 77.2 | | o3-mini (medium) | - | 97.3 | 76.5 | - | 74.9 |
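For runtimes that do not apply a chat template automatically, the Mistral Instruct (New) template shown above can be filled in by hand. A minimal sketch (the helper name is my own, and exact newline placement between segments may vary by runtime):

```python
def format_mistral_prompt(system_message: str, prompt: str) -> str:
    """Fill the Mistral Instruct (New) template from this card.

    Segment order follows the card's template; whitespace between
    segments is an assumption, so treat this as an illustration only.
    """
    return (
        f"[SYSTEM_PROMPT]{system_message}[/SYSTEM_PROMPT]"
        f"[INST]{prompt}[/INST]"
    )

text = format_mistral_prompt("Answer concisely.", "What is 7 * 6?")
print(text)
# → [SYSTEM_PROMPT]Answer concisely.[/SYSTEM_PROMPT][INST]What is 7 * 6?[/INST]
```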
RichardErkhov/tannedbum_-_L3-Nymeria-Maid-8B-8bits
RichardErkhov
"2025-04-01T09:59:47"
0
0
null
[ "safetensors", "llama", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-04-01T09:52:49"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) L3-Nymeria-Maid-8B - bnb 8bits - Model creator: https://huggingface.co/tannedbum/ - Original model: https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B/ Original model description: --- base_model: - princeton-nlp/Llama-3-Instruct-8B-SimPO - Sao10K/L3-8B-Stheno-v3.2 library_name: transformers tags: - mergekit - merge - roleplay - sillytavern - llama3 - not-for-all-audiences license: cc-by-nc-4.0 language: - en --- ![Nymeria](https://huggingface.co/tannedbum/L3-Nymeria-Maid-8B-exl2/resolve/main/Nymeria.png?) ## This version is solely for scientific purposes, of course. Nymeria is the balanced version, doesn't force nsfw. Nymeria-Maid has more Stheno's weights, leans more on nsfw and is more submissive. ## SillyTavern ## Text Completion presets ``` temp 0.9 top_k 30 top_p 0.75 min_p 0.2 rep_pen 1.1 smooth_factor 0.25 smooth_curve 1 ``` ## Advanced Formatting [Context & Instruct preset by Virt-io](https://huggingface.co/Virt-io/SillyTavern-Presets/tree/main/Prompts/LLAMA-3/v1.9) Instruct Mode: Enabled # merge This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). This model was merged using the slerp merge method. 
### Models Merged The following models were included in the merge: * [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2) * [princeton-nlp/Llama-3-Instruct-8B-SimPO](https://huggingface.co/princeton-nlp/Llama-3-Instruct-8B-SimPO) ### Configuration The following YAML configuration was used to produce this model: ```yaml slices: - sources: - model: Sao10K/L3-8B-Stheno-v3.2 layer_range: [0, 32] - model: princeton-nlp/Llama-3-Instruct-8B-SimPO layer_range: [0, 32] merge_method: slerp base_model: Sao10K/L3-8B-Stheno-v3.2 parameters: t: - filter: self_attn value: [0.2, 0.4, 0.4, 0.6] - filter: mlp value: [0.8, 0.6, 0.6, 0.4] - value: 0.4 dtype: bfloat16 ``` --- ## Original model information: ## Model: Sao10K/L3-8B-Stheno-v3.2 Stheno-v3.2-Zeta Changes compared to v3.1 <br>\- Included a mix of SFW and NSFW Storywriting Data, thanks to [Gryphe](https://huggingface.co/datasets/Gryphe/Opus-WritingPrompts) <br>\- Included More Instruct / Assistant-Style Data <br>\- Further cleaned up Roleplaying Samples from c2 Logs -> A few terrible, really bad samples escaped heavy filtering. Manual pass fixed it. <br>\- Hyperparameter tinkering for training, resulting in lower loss levels. Testing Notes - Compared to v3.1 <br>\- Handles SFW / NSFW separately better. Not as overly excessive with NSFW now. Kinda balanced. <br>\- Better at Storywriting / Narration. <br>\- Better at Assistant-type Tasks. <br>\- Better Multi-Turn Coherency -> Reduced Issues? <br>\- Slightly less creative? A worthy tradeoff. Still creative. <br>\- Better prompt / instruction adherence. --- Want to support my work? My Ko-fi page: https://ko-fi.com/tannedbum
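The SillyTavern text-completion preset above maps fairly directly onto Hugging Face `generate` keyword arguments; a hedged sketch follows. `smooth_factor`/`smooth_curve` (smooth sampling) have no direct transformers equivalent and are omitted, and `min_p` requires a recent transformers release:

```python
# Sketch: the SillyTavern preset above expressed as `generate` kwargs.
GENERATION_KWARGS = {
    "do_sample": True,
    "temperature": 0.9,
    "top_k": 30,
    "top_p": 0.75,
    "min_p": 0.2,               # supported in recent transformers releases
    "repetition_penalty": 1.1,  # SillyTavern's rep_pen
}

# Possible usage (assumes the model is available locally or on the Hub):
# from transformers import pipeline
# pipe = pipeline("text-generation", model="tannedbum/L3-Nymeria-Maid-8B")
# out = pipe("Hello!", max_new_tokens=128, **GENERATION_KWARGS)
```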
aw-infoprojekt/ppo-Huggy
aw-infoprojekt
"2024-03-05T11:03:39"
0
0
ml-agents
[ "ml-agents", "tensorboard", "onnx", "Huggy", "deep-reinforcement-learning", "reinforcement-learning", "ML-Agents-Huggy", "region:us" ]
reinforcement-learning
"2024-03-05T11:01:56"
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://unity-technologies.github.io/ml-agents/ML-Agents-Toolkit-Documentation/ We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: - A *short tutorial* where you teach Huggy the Dog 🐶 to fetch the stick and then play with him directly in your browser: https://huggingface.co/learn/deep-rl-course/unitbonus1/introduction - A *longer tutorial* to understand how ML-Agents works: https://huggingface.co/learn/deep-rl-course/unit5/introduction ### Resume the training ```bash mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser** 1. If the environment is part of the ML-Agents official environments, go to https://huggingface.co/unity 2. Find your model_id: aw-infoprojekt/ppo-Huggy 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
seachus/coursework
seachus
"2024-04-03T20:03:37"
4
0
stable-baselines3
[ "stable-baselines3", "LunarLander-v2", "deep-reinforcement-learning", "reinforcement-learning", "model-index", "region:us" ]
reinforcement-learning
"2024-04-03T20:03:15"
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 271.17 +/- 18.30 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
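The TODO in the usage section above can be filled in along these lines; a minimal sketch using the `huggingface_sb3` helper. The checkpoint filename follows the usual Deep RL course convention and is an assumption — check the repo's file list before relying on it:

```python
def evaluate_hub_model(repo_id: str, filename: str,
                       env_id: str = "LunarLander-v2",
                       n_eval_episodes: int = 10):
    """Download a PPO checkpoint from the Hub and report its mean reward.

    Imports are kept inside the function so the helper can be defined
    even when the RL dependencies are not installed.
    """
    import gymnasium as gym
    from huggingface_sb3 import load_from_hub
    from stable_baselines3 import PPO
    from stable_baselines3.common.evaluation import evaluate_policy

    checkpoint = load_from_hub(repo_id=repo_id, filename=filename)
    model = PPO.load(checkpoint)
    env = gym.make(env_id)
    return evaluate_policy(model, env, n_eval_episodes=n_eval_episodes)


if __name__ == "__main__":
    # The checkpoint filename is hypothetical; adjust to the repo's contents.
    mean_reward, std_reward = evaluate_hub_model("seachus/coursework",
                                                 "ppo-LunarLander-v2.zip")
    print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```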
Zekunli/flan-t5-base-extraction-cnndm_1000-all-loss-ep50
Zekunli
"2023-04-05T01:42:38"
108
0
transformers
[ "transformers", "pytorch", "t5", "text2text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text2text-generation
"2023-04-05T00:43:38"
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: flan-t5-base-extraction-cnndm_1000-all-loss-ep50 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # flan-t5-base-extraction-cnndm_1000-all-loss-ep50 This model is a fine-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8701 - Hint Hit Num: 2.2908 - Hint Precision: 0.4224 - Num: 5.4042 - Gen Len: 18.9791 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 60 - eval_batch_size: 400 - seed: 1799 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Hint Hit Num | Hint Precision | Num | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------------:|:--------------:|:------:|:-------:| | 2.4002 | 5.88 | 100 | 1.9290 | 2.16 | 0.4094 | 5.2408 | 18.9734 | | 2.0248 | 11.76 | 200 | 1.8906 | 2.2181 | 0.4168 | 5.2966 | 18.9708 | | 1.9015 | 17.65 | 300 | 1.8701 | 2.2908 | 0.4224 | 5.4042 | 18.9791 | | 1.8255 | 23.53 | 400 | 1.8709 | 2.3019 | 0.4236 | 5.4179 | 18.9865 | | 1.7675 | 29.41 | 500 | 1.8756 | 2.3209 | 0.4249 | 5.4454 | 18.9889 | | 1.7203 | 35.29 | 600 | 1.8798 | 2.3049 | 0.4239 | 5.4188 | 18.9861 | | 1.6937 | 41.18 | 700 | 1.8791 | 2.311 | 0.4232 | 5.4415 | 18.9895 | | 1.6725 | 47.06 | 800 | 1.8822 | 2.3248 | 0.4247 | 5.451 | 18.989 | ### Framework versions - Transformers 4.18.0 - Pytorch 1.10.0+cu111 - Datasets 2.5.1 - Tokenizers 0.12.1
FluxML/resnet34
FluxML
"2023-05-07T07:34:07"
0
0
null
[ "license:mit", "region:us" ]
null
"2022-06-09T02:51:41"
--- license: mit --- ResNet34 model ported from [torchvision](https://pytorch.org/vision/stable/index.html) for use with [Metalhead.jl](https://github.com/FluxML/Metalhead.jl). The scripts for creating this file can be found at [this gist](https://gist.github.com/darsnack/bfb8594cf5fdc702bdacb66586f518ef). To use this model in Julia, [add the Metalhead.jl package to your environment](https://pkgdocs.julialang.org/v1/managing-packages/#Adding-packages). Then execute: ```julia using Metalhead model = ResNet(34; pretrain = true) ```
research-dump/distilbert-base-uncased_outcome_pred_wikipedia_masked
research-dump
"2024-12-09T20:44:18"
108
0
transformers
[ "transformers", "safetensors", "distilbert", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-12-09T20:43:50"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
isspek/roberta-base_zika_llama_3_2e-5_16_undersampling_0.5
isspek
"2024-12-03T19:46:39"
200
0
transformers
[ "transformers", "safetensors", "roberta", "text-classification", "arxiv:1910.09700", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2024-12-03T19:46:13"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
Akriel/q-FrozenLake-v1-4x4-noSlippery
Akriel
"2023-02-12T17:36:01"
0
0
null
[ "FrozenLake-v1-4x4-no_slippery", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2023-02-12T17:35:52"
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Akriel/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
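Once the pickled Q-table is loaded (the `load_from_hub` helper above comes from the Deep RL course utilities), acting greedily is one `argmax` per step; a minimal sketch with a hypothetical 4-state, 2-action table:

```python
import numpy as np

def greedy_action(qtable: np.ndarray, state: int) -> int:
    """Pick the action with the highest Q-value for the given state."""
    return int(np.argmax(qtable[state]))

# Hypothetical 4-state, 2-action Q-table, for illustration only.
qtable = np.array([
    [0.1, 0.9],
    [0.8, 0.2],
    [0.0, 0.0],
    [0.5, 0.6],
])
print([greedy_action(qtable, s) for s in range(4)])  # prints: [1, 0, 0, 1]
```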
Hanyuezhuohua/Qwen2.5-Instruct-7B-COIG-P
Hanyuezhuohua
"2025-04-22T15:24:38"
0
0
null
[ "region:us" ]
null
"2025-04-22T15:24:38"
adi1494/distilbert-base-uncased-finetuned-squad
adi1494
"2022-06-10T12:39:00"
62
0
transformers
[ "transformers", "tf", "tensorboard", "distilbert", "question-answering", "generated_from_keras_callback", "license:apache-2.0", "endpoints_compatible", "region:us" ]
question-answering
"2022-06-10T06:38:11"
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: adi1494/distilbert-base-uncased-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # adi1494/distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.5671 - Validation Loss: 1.2217 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 5532, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.5671 | 1.2217 | 0 | ### Framework versions - Transformers 4.19.3 - TensorFlow 2.8.2 - Datasets 2.2.2 - Tokenizers 0.12.1
huggingtweets/tr0g
huggingtweets
"2021-05-23T02:46:04"
7
0
transformers
[ "transformers", "pytorch", "jax", "gpt2", "text-generation", "huggingtweets", "en", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2022-03-02T23:29:05"
--- language: en thumbnail: https://www.huggingtweets.com/tr0g/1616618745428/predictions.png tags: - huggingtweets widget: - text: "My dream is" --- <div> <div style="width: 132px; height:132px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1273984876392349697/AFvSEcBV_400x400.jpg')"> </div> <div style="margin-top: 8px; font-size: 19px; font-weight: 800">Demiurgent 🥃🖤 🤖 AI Bot </div> <div style="font-size: 15px">@tr0g bot</div> </div> I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets). Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)! ## How does it work? The model uses the following pipeline. ![pipeline](https://github.com/borisdayma/huggingtweets/blob/master/img/pipeline.png?raw=true) To understand how the model was developed, check the [W&B report](https://app.wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-model-to-generate-tweets--VmlldzoxMTY5MjI). ## Training data The model was trained on [@tr0g's tweets](https://twitter.com/tr0g). | Data | Quantity | | --- | --- | | Tweets downloaded | 3177 | | Retweets | 903 | | Short tweets | 135 | | Tweets kept | 2139 | [Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2scc74zx/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline. ## Training procedure The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @tr0g's tweets. Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1ttncfru) for full transparency and reproducibility. At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1ttncfru/artifacts) is logged and versioned. 
## How to use You can use this model directly with a pipeline for text generation: ```python from transformers import pipeline generator = pipeline('text-generation', model='huggingtweets/tr0g') generator("My dream is", num_return_sequences=5) ``` ## Limitations and bias The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias). In addition, the data present in the user's tweets further affects the text generated by the model. ## About *Built by Boris Dayma* [![Follow](https://img.shields.io/twitter/follow/borisdayma?style=social)](https://twitter.com/intent/follow?screen_name=borisdayma) For more details, visit the project repository. [![GitHub stars](https://img.shields.io/github/stars/borisdayma/huggingtweets?style=social)](https://github.com/borisdayma/huggingtweets)
aheikwok/CUsciencecenter
aheikwok
"2025-02-20T10:08:39"
0
0
diffusers
[ "diffusers", "flux", "lora", "replicate", "text-to-image", "en", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
text-to-image
"2025-02-20T06:53:27"
--- license: other license_name: flux-1-dev-non-commercial-license license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md language: - en tags: - flux - diffusers - lora - replicate base_model: "black-forest-labs/FLUX.1-dev" pipeline_tag: text-to-image # widget: # - text: >- # prompt # output: # url: https://... instance_prompt: cupavhar --- # Cusciencecenter <Gallery /> Trained on Replicate using: https://replicate.com/ostris/flux-dev-lora-trainer/train ## Trigger words You should use `cupavhar` to trigger the image generation. ## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers) ```py from diffusers import AutoPipelineForText2Image import torch pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda') pipeline.load_lora_weights('aheikwok/CUsciencecenter', weight_name='lora.safetensors') image = pipeline('your prompt').images[0] ``` For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
JuanVallejo/taxiv3RL
JuanVallejo
"2024-06-17T04:33:18"
0
0
null
[ "Taxi-v3", "q-learning", "reinforcement-learning", "custom-implementation", "model-index", "region:us" ]
reinforcement-learning
"2024-06-17T04:33:15"
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: taxiv3RL results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.54 +/- 2.73 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="JuanVallejo/taxiv3RL", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
LoneStriker/Tess-34B-v1.5b-8.0bpw-h8-exl2
LoneStriker
"2024-01-29T00:10:47"
2
0
transformers
[ "transformers", "pytorch", "llama", "text-generation", "license:other", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-01-28T23:55:47"
--- license: other license_name: yi-34b license_link: https://huggingface.co/01-ai/Yi-34B-200K/blob/main/LICENSE --- <br> ![Tesoro](https://huggingface.co/migtissera/Tess-M-v1.0/resolve/main/Tess.png) <br> Tess, short for Tesoro (Treasure in Italian), is a general purpose Large Language Model series. Tess-34B-v1.5b was trained on the Yi-34B-200K base. # Prompt Format: ``` SYSTEM: <ANY SYSTEM CONTEXT> USER: ASSISTANT: ```
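The SYSTEM/USER/ASSISTANT template above can be assembled programmatically; a minimal sketch, assuming the fields are newline-separated and using a placeholder system message:

```python
def format_tess_prompt(user_message: str,
                       system: str = "You are a helpful assistant.") -> str:
    """Build a prompt following the SYSTEM/USER/ASSISTANT template above."""
    return f"SYSTEM: {system}\nUSER: {user_message}\nASSISTANT:"

# The model's completion is generated after the trailing "ASSISTANT:".
prompt = format_tess_prompt("What does 'Tesoro' mean?")
print(prompt)
```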
ashenwhisper/grantlevine
ashenwhisper
"2025-04-30T17:32:55"
0
0
null
[ "license:apache-2.0", "region:us" ]
null
"2025-04-30T17:32:55"
--- license: apache-2.0 ---
jaycentg/som-mbert-focal
jaycentg
"2025-01-20T13:36:35"
6
0
transformers
[ "transformers", "tensorboard", "safetensors", "bert", "text-classification", "generated_from_trainer", "base_model:google-bert/bert-base-multilingual-uncased", "base_model:finetune:google-bert/bert-base-multilingual-uncased", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
text-classification
"2025-01-20T13:36:12"
--- library_name: transformers license: apache-2.0 base_model: google-bert/bert-base-multilingual-uncased tags: - generated_from_trainer model-index: - name: som-mbert-focal results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # som-mbert-focal This model is a fine-tuned version of [google-bert/bert-base-multilingual-uncased](https://huggingface.co/google-bert/bert-base-multilingual-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 8.4833 - F1-micro: 0.3198 - F1-macro: 0.3067 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 33 - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1-micro | F1-macro | |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:| | 73.2537 | 1.0 | 85 | 9.2650 | 0.2594 | 0.2558 | | 67.6744 | 2.0 | 170 | 8.4755 | 0.2840 | 0.2757 | | 62.5427 | 3.0 | 255 | 8.2914 | 0.3046 | 0.2956 | | 58.2398 | 4.0 | 340 | 8.4833 | 0.3198 | 0.3067 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.1+cu121 - Datasets 3.2.0 - Tokenizers 0.19.1
marmoh2002/temp
marmoh2002
"2024-12-08T17:06:11"
105
0
transformers
[ "transformers", "safetensors", "florence2", "text-generation", "custom_code", "arxiv:1910.09700", "autotrain_compatible", "region:us" ]
text-generation
"2024-12-08T16:17:25"
--- library_name: transformers tags: [] --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. 
## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. --> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. 
(2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
RichardErkhov/roger33303_-_llama3.2-3b-Instruct-Finetune-website-QnA-8bits
RichardErkhov
"2025-03-05T04:41:09"
0
0
null
[ "safetensors", "llama", "8-bit", "bitsandbytes", "region:us" ]
null
"2025-03-05T04:39:01"
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) llama3.2-3b-Instruct-Finetune-website-QnA - bnb 8bits - Model creator: https://huggingface.co/roger33303/ - Original model: https://huggingface.co/roger33303/llama3.2-3b-Instruct-Finetune-website-QnA/ Original model description: --- base_model: unsloth/llama-3.2-3b-instruct-bnb-4bit language: - en license: apache-2.0 tags: - text-generation-inference - transformers - unsloth - llama - trl model-index: - name: roger33303/llama3.2-3b-Instruct-Finetune-website-QnA results: - task: type: text-generation name: Text Generation dataset: name: Custom type: Custom split: test metrics: - name: SACREBLEU type: sacrebleu value: 85.979319 - task: type: text-generation name: Text Generation dataset: name: Custom type: Custom split: test metrics: - name: CER_SCORE type: cer_score value: 0.137935 - task: type: text-generation name: Text Generation dataset: name: Custom type: Custom split: test metrics: - name: METEOR type: meteor value: 0.903607 - task: type: text-generation name: Text Generation dataset: name: Custom type: Custom split: test metrics: - name: ROUGE1 type: rouge1 value: 0.904733 - task: type: text-generation name: Text Generation dataset: name: Custom type: Custom split: test metrics: - name: ROUGE2 type: rouge2 value: 0.867881 - task: type: text-generation name: Text Generation dataset: name: Custom type: Custom split: test metrics: - name: ROUGEL type: rougeL value: 0.905276 - task: type: text-generation name: Text Generation dataset: name: Custom type: Custom split: test metrics: - name: ROUGELSUM type: rougeLsum value: 0.904084 --- # Uploaded model - **Developed by:** roger33303 - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-3b-instruct-bnb-4bit
HuggingFaceTB/finemath-ablation-finemath-3plus
HuggingFaceTB
"2024-12-19T13:30:09"
6
0
null
[ "safetensors", "llama", "en", "dataset:HuggingFaceTB/finemath", "base_model:meta-llama/Llama-3.2-3B", "base_model:finetune:meta-llama/Llama-3.2-3B", "license:apache-2.0", "region:us" ]
null
"2024-12-18T17:27:30"
--- license: apache-2.0 datasets: - HuggingFaceTB/finemath language: - en base_model: - meta-llama/Llama-3.2-3B --- # Model Card ## Model summary This model is part of the 📐 [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath) ablations, we continue pretraining [Llama-3.2-3B](https://huggingface.co/meta-llama/Llama-3.2-3B) base on different math datasets for 60B tokens. The model has 3.21B parameters and 4096 context length. It was trained on **60B tokens** from FineMath-3+ subset of 📐 [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath), tokenized using `llama3` tokenizer. - **License**: Apache-2 - **Languages**: English ## Use ### Intended use This model was trained on English math data and is not instruction-tuned, making it intended for text completion in English with a focus on math. It is important to note that the primary intended use case of this model is to compare its performance with other models trained under the same conditions. This model is not necessarily the best possible outcome achievable with the given dataset. ### Generation ```python # pip install -q transformers from transformers import AutoModelForCausalLM, AutoTokenizer model = MODEL_HERE device = "cuda" # for GPU usage or "cpu" for CPU usage tokenizer = AutoTokenizer.from_pretrained(model) model = AutoModelForCausalLM.from_pretrained(model).to(device) inputs = tokenizer.encode("Machine Learning is", return_tensors="pt").to(device) outputs = model.generate(inputs) print(tokenizer.decode(outputs[0])) ``` ## Intermediate checkpoints We are releasing intermediate checkpoints for this model at intervals of every 10000 training steps (10B tokens) in separate branches. The naming convention is `10B`. 
You can load a specific model revision with `transformers` using the argument `revision`: ```python model = AutoModelForCausalLM.from_pretrained(MODEL_HERE, revision="10B") ``` You can access all the revisions for the models via the following code: ```python from huggingface_hub import list_repo_refs out = list_repo_refs(MODEL_HERE) print([b.name for b in out.branches]) ``` ## Training ### Model - **Architecture**: Llama3 - **Pretraining steps**: 60k - **Pretraining tokens**: 60B - **Precision**: bfloat16 ### Hardware - **GPUs**: 64 H100 ### Software - [nanotron](https://github.com/huggingface/nanotron/) for training - [datatrove](https://github.com/huggingface/datatrove) for tokenization - [lighteval](https://github.com/huggingface/lighteval) for evaluation ## Evaluation We used the SmolLM2 setup to evaluate all our ablation models with `lighteval`. You can find the details here: https://github.com/huggingface/smollm/tree/main/evaluation#smollm2-base-models ## Limitations This model was predominantly trained on English math data, potentially limiting its performance in other languages. Furthermore, the model's behavior is influenced by the quality and diversity of its training data, which may include biases and harmful content.
miketes/Llama-3.2-11B-finetuned-epoch2-wave-ui
miketes
"2025-03-31T06:17:41"
0
0
transformers
[ "transformers", "mllama", "image-text-to-text", "text-generation-inference", "unsloth", "conversational", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
image-text-to-text
"2025-03-31T06:05:38"
--- base_model: unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit tags: - text-generation-inference - transformers - unsloth - mllama license: apache-2.0 language: - en --- # Uploaded finetuned model - **Developed by:** miketes - **License:** apache-2.0 - **Finetuned from model :** unsloth/llama-3.2-11b-vision-instruct-unsloth-bnb-4bit This mllama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library. [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
Duyen1rt/git-base-caption
Duyen1rt
"2025-02-12T04:43:26"
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "git", "image-text-to-text", "generated_from_trainer", "base_model:microsoft/git-base", "base_model:finetune:microsoft/git-base", "license:mit", "endpoints_compatible", "region:us" ]
image-text-to-text
"2025-02-12T03:05:21"
--- library_name: transformers license: mit base_model: microsoft/git-base tags: - generated_from_trainer model-index: - name: git-base-caption results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # git-base-caption This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 6.6179 - Wer Score: 9.0625 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Score | |:-------------:|:-----:|:----:|:---------------:|:---------:| | 10.1592 | 10.0 | 10 | 8.7538 | 31.2656 | | 8.2408 | 20.0 | 20 | 7.7825 | 31.5 | | 7.4378 | 30.0 | 30 | 7.1866 | 15.625 | | 6.9092 | 40.0 | 40 | 6.7962 | 9.375 | | 6.5994 | 50.0 | 50 | 6.6179 | 9.0625 | ### Framework versions - Transformers 4.48.2 - Pytorch 2.5.1+cu124 - Tokenizers 0.21.0
kokovova/782d7391-5da6-49ec-bf69-0278c4309649
kokovova
"2025-04-18T21:00:17"
0
0
peft
[ "peft", "safetensors", "llama", "axolotl", "generated_from_trainer", "base_model:unsloth/SmolLM2-360M", "base_model:adapter:unsloth/SmolLM2-360M", "license:apache-2.0", "region:us" ]
null
"2025-04-18T20:56:33"
QuantFactory/PersianMind-v1.0-GGUF
QuantFactory
"2024-10-03T10:14:32"
74
1
transformers
[ "transformers", "gguf", "text-generation-inference", "text-generation", "multilingual", "fa", "en", "arxiv:2401.06466", "license:cc-by-nc-sa-4.0", "co2_eq_emissions", "region:us" ]
text-generation
"2024-10-03T09:41:09"
--- license: cc-by-nc-sa-4.0 language: - multilingual - fa - en library_name: transformers tags: - text-generation-inference inference: false metrics: - bleu - comet - accuracy - perplexity - spearmanr pipeline_tag: text-generation co2_eq_emissions: emissions: 232380 source: "PersianMind: A Cross-Lingual Persian-English Large Language Model. https://arxiv.org/abs/2401.06466" training_type: "fine-tuning" hardware_used: "4 RTX3090 24GB GPUs" geographical_location: "Tehran, Iran" --- [![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory) # QuantFactory/PersianMind-v1.0-GGUF This is quantized version of [universitytehran/PersianMind-v1.0](https://huggingface.co/universitytehran/PersianMind-v1.0) created using llama.cpp # Original Model Card <p align="center"> <img src="PersianMind.jpg" alt="PersianMind logo" width=200/> </p> # <span style="font-variant:small-caps;">PersianMind</span> <span style="font-variant:small-caps;">PersianMind</span> is a cross-lingual Persian-English large language model. The model achieves state-of-the-art results on Persian subset of the [<span style="font-variant:small-caps;">Belebele</span>](https://github.com/facebookresearch/belebele) benchmark and the [ParsiNLU multiple-choice QA](https://github.com/persiannlp/parsinlu) task. It also attains performance comparable to GPT-3.5-turbo in a Persian reading comprehension task. ## Model Description - **Developed by:** [Pedram Rostami](mailto:pedram.rostami@ut.ac.ir), [Ali Salemi](mailto:alisalemi@ut.ac.ir), and [Mohammad Javad Dousti](mailto:mjdousti@ut.ac.ir) - **Model type:** Language model - **Languages:** English and Persian - **License:** [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/) (non-commercial use only.) 
## How to Get Started with the Model Use the code below to get started with the model. Note that you need to install <code><b>sentencepiece</b></code> and <code><b>accelerate</b></code> libraries along with <code><b>PyTorch</b></code> and <code><b>🤗Transformers</b></code> to run this code. ```python from transformers import AutoTokenizer, AutoModelForCausalLM import torch device = "cuda" if torch.cuda.is_available() else "cpu" model = AutoModelForCausalLM.from_pretrained( "universitytehran/PersianMind-v1.0", torch_dtype=torch.bfloat16, low_cpu_mem_usage=True, device_map={"": device}, ) tokenizer = AutoTokenizer.from_pretrained( "universitytehran/PersianMind-v1.0", ) TEMPLATE = "{context}\nYou: {prompt}\nPersianMind: " CONTEXT = "This is a conversation with PersianMind. It is an artificial intelligence model designed by a team of " \ "NLP experts at the University of Tehran to help you with various tasks such as answering questions, " \ "providing recommendations, and helping with decision making. You can ask it anything you want and " \ "it will do its best to give you accurate and relevant information." PROMPT = "در مورد هوش مصنوعی توضیح بده." model_input = TEMPLATE.format(context=CONTEXT, prompt=PROMPT) input_tokens = tokenizer(model_input, return_tensors="pt") input_tokens = input_tokens.to(device) generate_ids = model.generate(**input_tokens, max_new_tokens=512, do_sample=False, repetition_penalty=1.1) model_output = tokenizer.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0] print(model_output[len(model_input):]) ``` ### How to Quantize the Model Quantized models can be run on resource-constrained devices. To quantize the model, you should install the <code><b>bitsandbytes</b></code> library. In order to quantize the model in 8-bit (`INT8`), use the code below. 
```python model = AutoModelForCausalLM.from_pretrained( "universitytehran/PersianMind-v1.0", device_map="auto", low_cpu_mem_usage=True, load_in_8bit=True ) ``` Alternatively, you can quantize the model in 4-bit (`NormalFloat4`) with the following code. ```python from transformers import BitsAndBytesConfig quantization_config = BitsAndBytesConfig( load_in_4bit=True, bnb_4bit_use_double_quant=True, bnb_4bit_quant_type="nf4", ) model = AutoModelForCausalLM.from_pretrained( "universitytehran/PersianMind-v1.0", quantization_config=quantization_config, device_map="auto" ) ``` ### Evaluating Quantized Models | Model | <span style="font-variant:small-caps;">Belebele</span> (Persian) | Fa→En Translation<br>(<span style="font-variant:small-caps;">Comet</span>) | En→Fa Translation<br>(<span style="font-variant:small-caps;">Comet</span>) | Model Size | Tokens/sec | | :----------------------------------------------------------------: | :--------------------------------------------------------------: | :------------------------------------------------------------------------: | :------------------------------------------------------------------------: | :--------: | :--------: | | <span style="font-variant:small-caps;">PersianMind</span> (`BF16`) | 73.9 | 83.61 | 79.44 | 13.7G | 25.35 | | <span style="font-variant:small-caps;">PersianMind</span> (`INT8`) | 73.7 | 82.32 | 78.61 | 7.2G | 11.36 | | <span style="font-variant:small-caps;">PersianMind</span> (`NF4`) | 70.2 | 82.07 | 80.36 | 3.9G | 24.36 | We evaluated quantized models in various tasks against the original model. Specifically, we evaluated all models using the reading comprehension multiple-choice question-answering benchmark of [<span style="font-variant:small-caps;">Belebele</span>](https://github.com/facebookresearch/belebele) (Persian subset) and reported the accuracy of each model. Additionally, we evaluated our models for Persian-to-English and English-to-Persian translation tasks. 
For this, we utilized the Persian-English subset of the [<span style="font-variant:small-caps;">Flores</span>-200](https://github.com/facebookresearch/flores/tree/main/flores200) dataset and reported our results using the <span style="font-variant:small-caps;">Comet</span> metric. Furthermore, we calculated the average number of generated tokens per second by each model during running the translation tasks. To understand resource efficiency, we measured the memory usage of each model by employing the `get_memory_footprint()` function. ## License <span style="font-variant:small-caps;">PersianMind</span> is subject to Meta's [LLaMa2 Community License](https://raw.githubusercontent.com/facebookresearch/llama/main/LICENSE). It is further licensed under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/), which allows non-commercial use of the model. Commercial use of this model requires written agreement which must be obtained from the copyright holders who are listed as developers in this page. If you suspect any violations, please reach out to us. ## Citation If you find this model helpful, please ensure to cite the following paper. **BibTeX:** ```bibtex @misc{persianmind, title={{PersianMind: A Cross-Lingual Persian-English Large Language Model}}, author={Rostami, Pedram and Salemi, Ali and Dousti, Mohammad Javad}, year={2024}, eprint={2401.06466}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
NikolayKozloff/Llama-2-7b-dolphin-open_platypus-Q8_0-GGUF
NikolayKozloff
"2024-05-15T21:06:33"
3
1
null
[ "gguf", "instruct", "llama-cpp", "gguf-my-repo", "text-generation", "dataset:garage-bAInd/Open-Platypus", "dataset:Open-Orca/OpenOrca", "dataset:cognitivecomputations/dolphin", "base_model:meta-llama/Llama-2-7b-hf", "base_model:quantized:meta-llama/Llama-2-7b-hf", "endpoints_compatible", "region:us" ]
text-generation
"2024-05-15T21:06:12"
--- tags: - instruct - llama-cpp - gguf-my-repo base_model: meta-llama/Llama-2-7b-hf datasets: - garage-bAInd/Open-Platypus - Open-Orca/OpenOrca - cognitivecomputations/dolphin inference: true model_type: llama pipeline_tag: text-generation --- # NikolayKozloff/Llama-2-7b-dolphin-open_platypus-Q8_0-GGUF This model was converted to GGUF format from [`neuralmagic/Llama-2-7b-dolphin-open_platypus`](https://huggingface.co/neuralmagic/Llama-2-7b-dolphin-open_platypus) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/neuralmagic/Llama-2-7b-dolphin-open_platypus) for more details on the model. ## Use with llama.cpp Install llama.cpp through brew. ```bash brew install ggerganov/ggerganov/llama.cpp ``` Invoke the llama.cpp server or the CLI. CLI: ```bash llama-cli --hf-repo NikolayKozloff/Llama-2-7b-dolphin-open_platypus-Q8_0-GGUF --model llama-2-7b-dolphin-open_platypus.Q8_0.gguf -p "The meaning to life and the universe is" ``` Server: ```bash llama-server --hf-repo NikolayKozloff/Llama-2-7b-dolphin-open_platypus-Q8_0-GGUF --model llama-2-7b-dolphin-open_platypus.Q8_0.gguf -c 2048 ``` Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well. ``` git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m llama-2-7b-dolphin-open_platypus.Q8_0.gguf -n 128 ```
yuhuizhang/finetuned_gpt2-large_sst2_negation0.8
yuhuizhang
"2023-01-07T09:40:08"
4
0
transformers
[ "transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "dataset:sst2", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2023-01-07T08:35:13"
--- license: mit tags: - generated_from_trainer datasets: - sst2 model-index: - name: finetuned_gpt2-large_sst2_negation0.8 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned_gpt2-large_sst2_negation0.8 This model is a fine-tuned version of [gpt2-large](https://huggingface.co/gpt2-large) on the sst2 dataset. It achieves the following results on the evaluation set: - Loss: 3.6201 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.3586 | 1.0 | 1111 | 3.3100 | | 1.812 | 2.0 | 2222 | 3.5114 | | 1.5574 | 3.0 | 3333 | 3.6201 | ### Framework versions - Transformers 4.22.2 - Pytorch 1.12.1+cu113 - Datasets 2.5.2 - Tokenizers 0.12.1
Sarathbabu-Karunanithi/SQLGemma-2-2b-it
Sarathbabu-Karunanithi
"2024-12-10T12:53:21"
8
0
transformers
[ "transformers", "safetensors", "gemma2", "text-generation", "conversational", "dataset:b-mc2/sql-create-context", "arxiv:1910.09700", "license:gemma", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
"2024-12-09T09:00:36"

Dataset Card for Hugging Face Hub Model Cards

This dataset consists of model cards for models hosted on the Hugging Face Hub. The model cards are created by the community and provide information about the model, its performance, its intended uses, and more. This dataset is updated daily and includes publicly available models on the Hugging Face Hub.

This dataset is made available to help support users who want to work with a large number of model cards from the Hub. We hope it will support research on model cards and their use, although the format of this dataset may not suit every use case. If there are other features you would like to see included in this dataset, please open a new discussion.

Dataset Details

Uses

There are a number of potential uses for this dataset including:

  • text mining to find common themes in model cards
  • analysis of the model card format/content
  • topic modelling of model cards
  • analysis of the model card metadata
  • training language models on model cards
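For several of the uses above (metadata analysis, topic modelling, text mining), a common first step is separating a card's YAML front matter from its Markdown body. A minimal sketch of that split using only the standard library — the helper name and sample card below are illustrative, not part of the dataset:

```python
def split_front_matter(card_text):
    """Split a model card README into (metadata_block, body).

    Model cards on the Hub typically open with a YAML front-matter
    block delimited by `---` lines, followed by free-form Markdown.
    Returns (None, card_text) when no front matter is present.
    """
    if not card_text.startswith("---"):
        return None, card_text
    end = card_text.find("\n---", 3)  # closing delimiter
    if end == -1:
        return None, card_text
    metadata = card_text[3:end].strip()
    body = card_text[end + len("\n---"):].lstrip("\n")
    return metadata, body


sample_card = """---
license: apache-2.0
tags:
- text-classification
---
# My model

A short description.
"""

metadata, body = split_front_matter(sample_card)
print(metadata)  # the license/tags block
print(body)      # the Markdown body
```

From there, the metadata block can be parsed with a YAML library such as PyYAML, and the body passed to whatever text-analysis pipeline you prefer.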

Out-of-Scope Use

[More Information Needed]

Dataset Structure

This dataset has a single split.

Dataset Creation

Curation Rationale

The dataset was created to assist people in working with model cards, and in particular to support research on model cards and their use. It is also possible to use the Hugging Face Hub API or client library to download model cards directly, and that option may be preferable if you have a very specific use case or require a different format.
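As a sketch of that direct route: the raw README for any model is served from the Hub's `resolve` endpoint, so an individual card can be fetched with nothing but the standard library (the `huggingface_hub` client's `ModelCard.load` offers a higher-level alternative). The helper below only builds the URL; the commented lines show the actual download, which requires network access:

```python
from urllib.request import urlopen  # used only for the optional fetch

HUB = "https://huggingface.co"

def card_url(model_id, revision="main"):
    """URL of the raw README.md (the model card) for a Hub model."""
    return f"{HUB}/{model_id}/resolve/{revision}/README.md"

print(card_url("bert-base-uncased"))
# -> https://huggingface.co/bert-base-uncased/resolve/main/README.md

# Fetching the card itself (requires network access):
# with urlopen(card_url("bert-base-uncased")) as resp:
#     card_text = resp.read().decode("utf-8")
```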

Source Data

The source data is README.md files for models hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the model card directory.

Data Collection and Processing

The data is downloaded daily by a cron job.

Who are the source data producers?

The source data producers are the creators of the model cards on the Hugging Face Hub. This includes a broad variety of people from the community, ranging from large companies to individual researchers. We do not gather any information about who created a model card in this repository, although this information can be gathered from the Hugging Face Hub API.

Annotations [optional]

There are no additional annotations in this dataset beyond the model card content.

Annotation process

N/A

Who are the annotators?

N/A

Personal and Sensitive Information

We make no effort to anonymize the data. Whilst we don't expect the majority of model cards to contain personal or sensitive information, it is possible that some model cards may contain this information. Model cards may also link to websites or email addresses.

Bias, Risks, and Limitations

Model cards are created by the community, and we have no control over their content. We do not review model cards and make no claims about the accuracy of the information they contain. Some model cards themselves discuss bias, sometimes by providing examples of bias in either the training data or the model's responses. As a result, this dataset may contain examples of bias.

Whilst we do not directly download any images linked to in the model cards, some model cards may include images. Some of these images may not be suitable for all audiences.

Recommendations

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

Citation

No formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.

Dataset Card Authors

@davanstrien

Dataset Card Contact

@davanstrien

