modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card
---|---|---|---|---|---|---|---|---|---
esb/whisper-aed-librispeech | esb | "2022-10-24T14:40:00Z" | 0 | 0 | null | [
"esb",
"en",
"dataset:esb/datasets",
"dataset:librispeech_asr",
"region:us"
] | null | "2022-10-24T14:39:43Z" | ---
language:
- en
tags:
- esb
datasets:
- esb/datasets
- librispeech_asr
---
To reproduce this run, first install Whisper from the Transformers-compatible repo [patrickvonplaten/whisper](https://github.com/patrickvonplaten/whisper):
```
pip install git+https://github.com/patrickvonplaten/whisper.git
```
Then execute the command:
```bash
#!/usr/bin/env bash
CUDA_VISIBLE_DEVICES=0 python run_speech_recognition_whisper.py \
--model_name_or_path="medium.en" \
--dataset_name="esb/datasets" \
--dataset_config_name="librispeech" \
--max_steps="5000" \
--output_dir="./" \
--run_name="whisper-librispeech" \
--wandb_project="whisper" \
--per_device_train_batch_size="64" \
--per_device_eval_batch_size="16" \
--logging_steps="25" \
--learning_rate="1e-4" \
--warmup_steps="500" \
--report_to="wandb" \
--preprocessing_num_workers="16" \
--evaluation_strategy="steps" \
--eval_steps="1000" \
--save_strategy="steps" \
--save_steps="1000" \
--generation_max_length="224" \
--length_column_name="input_lengths" \
--gradient_checkpointing \
--group_by_length \
--freeze_encoder \
--fp16 \
--overwrite_output_dir \
--do_train \
--do_eval \
--do_predict \
--predict_with_generate \
--use_auth_token
```
|
Robin021/llama-7b-hf | Robin021 | "2023-04-07T10:47:19Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-04-07T10:36:52Z" | ---
license: other
---
LLaMA-7B converted to work with Transformers/HuggingFace. This is under a special license, please see the LICENSE file for details.
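A minimal loading sketch with Transformers (assuming the standard LLaMA classes; the prompt is illustrative, and this is not an official snippet from this repo):

```python
from transformers import LlamaForCausalLM, LlamaTokenizer

# Load the converted checkpoint from this repository
tokenizer = LlamaTokenizer.from_pretrained("Robin021/llama-7b-hf")
model = LlamaForCausalLM.from_pretrained("Robin021/llama-7b-hf")

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```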
# LLaMA Model Card
## Model details
**Organization developing the model**
The FAIR team of Meta AI.
**Model date**
LLaMA was trained between December 2022 and February 2023.
**Model version**
This is version 1 of the model.
**Model type**
LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters.
**Paper or resources for more information**
More information can be found in the paper “LLaMA: Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/.
**Citations details**
https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/
**License**
Non-commercial bespoke license
**Where to send questions or comments about the model**
Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue.
## Intended use
**Primary intended uses**
The primary use of LLaMA is research on large language models, including:
- exploring potential applications such as question answering, natural language understanding or reading comprehension,
- understanding the capabilities and limitations of current language models, and developing techniques to improve them,
- evaluating and mitigating biases, risks, toxic and harmful content generation, and hallucinations.
**Primary intended users**
The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence.
**Out-of-scope use cases**
LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers.
## Factors
**Relevant factors**
One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model.
**Evaluation factors**
As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model.
## Metrics
**Model performance measures**
We use the following measures to evaluate the model:
- Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs,
- Exact match for question answering,
- The toxicity score from Perspective API on RealToxicityPrompts.
**Decision thresholds**
Not applicable.
**Approaches to uncertainty and variability**
Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training.
## Evaluation datasets
The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs.
## Training dataset
The model was trained using the following sources of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange [2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing.
## Quantitative analysis
Hyperparameters for the model architecture
| Number of parameters | dimension | n heads | n layers | Learning rate | Batch size | n tokens |
| --- | --- | --- | --- | --- | --- | --- |
| 7B | 4096 | 32 | 32 | 3.0E-04 | 4M | 1T |
| 13B | 5120 | 40 | 40 | 3.0E-04 | 4M | 1T |
| 33B | 6656 | 52 | 60 | 1.5E-04 | 4M | 1.4T |
| 65B | 8192 | 64 | 80 | 1.5E-04 | 4M | 1.4T |

*Table 1 - Summary of LLaMA model hyperparameters*
We present our results on eight standard common sense reasoning benchmarks in the table below.
| Number of parameters | BoolQ | PIQA | SIQA | HellaSwag | WinoGrande | ARC-e | ARC-c | OBQA | COPA |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 7B | 76.5 | 79.8 | 48.9 | 76.1 | 70.1 | 76.7 | 47.6 | 57.2 | 93 |
| 13B | 78.1 | 80.1 | 50.4 | 79.2 | 73 | 78.1 | 52.7 | 56.4 | 94 |
| 33B | 83.1 | 82.3 | 50.4 | 82.8 | 76 | 81.4 | 57.8 | 58.6 | 92 |
| 65B | 85.3 | 82.8 | 52.3 | 84.2 | 77 | 81.5 | 56 | 60.2 | 94 |

*Table 2 - Summary of LLaMA model performance on reasoning tasks*
We present our results on bias in the table below. Note that lower values are better, indicating lower bias.
| No | Category | FAIR LLM |
| --- | -------------------- | -------- |
| 1 | Gender | 70.6 |
| 2 | Religion | 79 |
| 3 | Race/Color | 57 |
| 4 | Sexual orientation | 81 |
| 5 | Age | 70.1 |
| 6 | Nationality | 64.2 |
| 7 | Disability | 66.7 |
| 8 | Physical appearance | 77.8 |
| 9 | Socioeconomic status | 71.5 |
| | LLaMA Average | 66.6 |
*Table 3 - Summary of bias in our model outputs*
## Ethical considerations
**Data**
The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data.
**Human life**
The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations**
We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier.
**Risks and harms**
Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard.
**Use cases**
LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigations of risks. These risks and potential fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
|
dofbi/wolof-asr | dofbi | "2024-12-01T06:56:46Z" | 44 | 0 | null | [
"safetensors",
"whisper",
"audio-text-to-text",
"wo",
"dataset:galsenai/wolof_tts",
"base_model:openai/whisper-small",
"base_model:finetune:openai/whisper-small",
"license:mit",
"region:us"
] | audio-text-to-text | "2024-11-26T14:48:39Z" | ---
license: mit
datasets:
- galsenai/wolof_tts
language:
- wo
metrics:
- accuracy
base_model:
- openai/whisper-small
pipeline_tag: audio-text-to-text
---
# **Whisper for Wolof ASR**
This repository contains a fine-tuned version of the Whisper model for automatic speech recognition (ASR) in **Wolof**, a language spoken mainly in Senegal, The Gambia, and Mauritania. The model uses the Whisper architecture, designed for speech transcription and conditional generation tasks.
---
## **Key features**
- **Whisper-based architecture**
  - Encoder and decoder of 12 layers each.
  - Optimized multi-head attention (`WhisperSdpaAttention`).
  - An extended vocabulary of 51,865 tokens for broad linguistic coverage.
- **Optimized for Wolof**
  - Fine-tuned on a Wolof-specific corpus.
  - Transcribes audio samples to text with a competitive **Word Error Rate (WER)**.
- **Example applications**
  - Audio transcription of conversations in Wolof.
  - Use in academic, educational, and linguistic research contexts.
---
## **Performance**
- **Average WER**: **12%**
- **WER on noisy samples**: **15%**
- Evaluations are based on Wolof-specific test data (see the sketch below for how WER is computed).
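As a rough illustration only (not this project's evaluation script), the reported WER can be computed with the Hugging Face `evaluate` library; the reference and predicted sentences below are hypothetical:

```python
import evaluate

# Load the standard word-error-rate metric
wer = evaluate.load("wer")

# Hypothetical reference transcriptions and model outputs
references = ["salaam aleekum", "naka nga def"]
predictions = ["salaam aleekum", "naka nga dem"]

# WER = (substitutions + insertions + deletions) / number of reference words
score = wer.compute(references=references, predictions=predictions)
print(f"WER: {score:.2%}")
```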
---
## **Usage example**
Here is a simple example of using the model with the Hugging Face Transformers library:
```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor
import torch
# Load the model and processor
model = WhisperForConditionalGeneration.from_pretrained("dofbi/wolof-asr")
processor = WhisperProcessor.from_pretrained("dofbi/wolof-asr")
# Preprocess the audio (spectrogram or raw audio input)
audio_input = ...  # load a spectrogram or preprocessed audio data
inputs = processor(audio_input, return_tensors="pt").input_features
# Generate the transcription
predicted_ids = model.generate(inputs)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
print("Transcription:", transcription)
```
---
## **Installation guide**
1. Clone this repository:
```bash
git clone https://huggingface.co/dofbi/wolof-asr
cd wolof-asr
```
2. Install the dependencies:
```bash
pip install transformers torch torchaudio
```
3. Test an example with an audio file:
```bash
python app.py --audio_file path/to/audio.wav
```
---
## **Fine-tuning the model**
If you want to adapt this model to your own data, the main steps are:
1. Prepare your data as audio samples paired with text transcriptions.
2. Use the provided fine-tuning script (see `src/trainer.py`) with your data:
```bash
python src/trainer.py --train_data path/to/train_data.json --val_data path/to/val_data.json
```
3. Save the fine-tuned model and load it as shown in the examples above.
---
## **About**
This model was developed as part of a project to promote speech recognition for under-represented languages such as Wolof. Feel free to contribute, report issues, or suggest improvements via this repository's issue tracker.
---
## **License**
This model is released under the MIT license. See the `LICENSE` file for details.
---
|
genki10/ASAP_FineTuningBERT_AugV10_k5_task1_organization_k5_k5_fold4 | genki10 | "2025-02-13T03:30:26Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-02-13T03:06:53Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: ASAP_FineTuningBERT_AugV10_k5_task1_organization_k5_k5_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ASAP_FineTuningBERT_AugV10_k5_task1_organization_k5_k5_fold4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7661
- Qwk: 0.5270
- Mse: 0.7661
- Rmse: 0.8753
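For reference, the Qwk value above is a quadratic weighted kappa, which can be computed with scikit-learn; the scores below are hypothetical:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical gold scores and model predictions on an ordinal scale
y_true = [1, 2, 3, 4, 2, 3]
y_pred = [1, 2, 2, 4, 3, 3]

# Quadratic weighting penalizes large ordinal disagreements more heavily
qwk = cohen_kappa_score(y_true, y_pred, weights="quadratic")
print(f"QWK: {qwk:.4f}")
```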
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|
| No log | 1.0 | 4 | 8.1195 | 0.0 | 8.1195 | 2.8495 |
| No log | 2.0 | 8 | 5.7076 | -0.0143 | 5.7076 | 2.3891 |
| No log | 3.0 | 12 | 3.2181 | 0.0040 | 3.2181 | 1.7939 |
| No log | 4.0 | 16 | 1.6812 | 0.0622 | 1.6812 | 1.2966 |
| No log | 5.0 | 20 | 1.1474 | 0.0213 | 1.1474 | 1.0712 |
| No log | 6.0 | 24 | 1.6032 | 0.0605 | 1.6032 | 1.2662 |
| No log | 7.0 | 28 | 0.8685 | 0.1794 | 0.8685 | 0.9320 |
| No log | 8.0 | 32 | 1.0612 | 0.0981 | 1.0612 | 1.0301 |
| No log | 9.0 | 36 | 0.8339 | 0.2947 | 0.8339 | 0.9132 |
| No log | 10.0 | 40 | 0.8960 | 0.3388 | 0.8960 | 0.9466 |
| No log | 11.0 | 44 | 0.7682 | 0.4303 | 0.7682 | 0.8764 |
| No log | 12.0 | 48 | 0.6747 | 0.4586 | 0.6747 | 0.8214 |
| No log | 13.0 | 52 | 1.0150 | 0.3952 | 1.0150 | 1.0075 |
| No log | 14.0 | 56 | 0.6675 | 0.5234 | 0.6675 | 0.8170 |
| No log | 15.0 | 60 | 0.9004 | 0.4332 | 0.9004 | 0.9489 |
| No log | 16.0 | 64 | 1.0515 | 0.4016 | 1.0515 | 1.0254 |
| No log | 17.0 | 68 | 0.7313 | 0.5316 | 0.7313 | 0.8551 |
| No log | 18.0 | 72 | 0.8971 | 0.4511 | 0.8971 | 0.9472 |
| No log | 19.0 | 76 | 0.7584 | 0.5102 | 0.7584 | 0.8709 |
| No log | 20.0 | 80 | 0.7135 | 0.5438 | 0.7135 | 0.8447 |
| No log | 21.0 | 84 | 1.0331 | 0.4139 | 1.0331 | 1.0164 |
| No log | 22.0 | 88 | 0.7536 | 0.5342 | 0.7536 | 0.8681 |
| No log | 23.0 | 92 | 0.7678 | 0.5089 | 0.7678 | 0.8762 |
| No log | 24.0 | 96 | 0.9259 | 0.4624 | 0.9259 | 0.9623 |
| No log | 25.0 | 100 | 0.7625 | 0.5576 | 0.7625 | 0.8732 |
| No log | 26.0 | 104 | 0.7826 | 0.5259 | 0.7826 | 0.8846 |
| No log | 27.0 | 108 | 0.8239 | 0.4534 | 0.8239 | 0.9077 |
| No log | 28.0 | 112 | 0.7630 | 0.5093 | 0.7630 | 0.8735 |
| No log | 29.0 | 116 | 0.7821 | 0.5358 | 0.7821 | 0.8844 |
| No log | 30.0 | 120 | 0.7519 | 0.5261 | 0.7519 | 0.8671 |
| No log | 31.0 | 124 | 0.8739 | 0.4759 | 0.8739 | 0.9348 |
| No log | 32.0 | 128 | 0.8679 | 0.4772 | 0.8679 | 0.9316 |
| No log | 33.0 | 132 | 0.8149 | 0.5078 | 0.8149 | 0.9027 |
| No log | 34.0 | 136 | 0.9182 | 0.4519 | 0.9182 | 0.9582 |
| No log | 35.0 | 140 | 0.8011 | 0.5110 | 0.8011 | 0.8951 |
| No log | 36.0 | 144 | 0.8547 | 0.4691 | 0.8547 | 0.9245 |
| No log | 37.0 | 148 | 0.7940 | 0.5220 | 0.7940 | 0.8910 |
| No log | 38.0 | 152 | 0.7559 | 0.5605 | 0.7559 | 0.8694 |
| No log | 39.0 | 156 | 0.8591 | 0.4696 | 0.8591 | 0.9269 |
| No log | 40.0 | 160 | 0.7570 | 0.5390 | 0.7570 | 0.8700 |
| No log | 41.0 | 164 | 1.0017 | 0.4321 | 1.0017 | 1.0008 |
| No log | 42.0 | 168 | 0.7519 | 0.5313 | 0.7519 | 0.8671 |
| No log | 43.0 | 172 | 0.9590 | 0.4481 | 0.9590 | 0.9793 |
| No log | 44.0 | 176 | 0.7704 | 0.5317 | 0.7704 | 0.8777 |
| No log | 45.0 | 180 | 0.7769 | 0.4870 | 0.7769 | 0.8814 |
| No log | 46.0 | 184 | 0.7592 | 0.5299 | 0.7592 | 0.8713 |
| No log | 47.0 | 188 | 0.8565 | 0.4869 | 0.8565 | 0.9255 |
| No log | 48.0 | 192 | 0.7904 | 0.4852 | 0.7904 | 0.8890 |
| No log | 49.0 | 196 | 0.7772 | 0.5132 | 0.7772 | 0.8816 |
| No log | 50.0 | 200 | 0.7194 | 0.5583 | 0.7194 | 0.8482 |
| No log | 51.0 | 204 | 0.7371 | 0.5240 | 0.7371 | 0.8586 |
| No log | 52.0 | 208 | 0.7949 | 0.4944 | 0.7949 | 0.8916 |
| No log | 53.0 | 212 | 0.7686 | 0.5068 | 0.7686 | 0.8767 |
| No log | 54.0 | 216 | 0.8027 | 0.5031 | 0.8027 | 0.8959 |
| No log | 55.0 | 220 | 0.7894 | 0.5101 | 0.7894 | 0.8885 |
| No log | 56.0 | 224 | 0.7427 | 0.5148 | 0.7427 | 0.8618 |
| No log | 57.0 | 228 | 0.7925 | 0.5080 | 0.7925 | 0.8902 |
| No log | 58.0 | 232 | 0.7661 | 0.5270 | 0.7661 | 0.8753 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
danylov/Reinforce-Cartpole | danylov | "2024-02-25T14:32:01Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2024-02-25T14:31:52Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: Reinforce-Cartpole
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 496.00 +/- 12.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1** .
To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
DRAGOO/mounir4 | DRAGOO | "2023-05-29T11:38:26Z" | 75 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2023-05-28T19:47:49Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: mounir4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mounir4
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6829
- Wer: 1 (i.e., a 100% word error rate)
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:-----:|:---------------:|:---:|
| 3.3494 | 8.51 | 500 | 3.1482 | 1 |
| 2.9331 | 17.02 | 1000 | 2.9053 | 1 |
| 2.8691 | 25.53 | 1500 | 2.8793 | 1 |
| 2.8393 | 34.04 | 2000 | 2.8696 | 1 |
| 1.9588 | 42.55 | 2500 | 1.5982 | 1 |
| 0.9108 | 51.06 | 3000 | 0.8335 | 1 |
| 0.7196 | 59.57 | 3500 | 0.7443 | 1 |
| 0.6198 | 68.09 | 4000 | 0.6949 | 1 |
| 0.5558 | 76.6 | 4500 | 0.6862 | 1 |
| 0.5152 | 85.11 | 5000 | 0.6743 | 1 |
| 0.4781 | 93.62 | 5500 | 0.6668 | 1 |
| 0.4442 | 102.13 | 6000 | 0.6587 | 1 |
| 0.4255 | 110.64 | 6500 | 0.6498 | 1 |
| 0.408 | 119.15 | 7000 | 0.6698 | 1 |
| 0.3888 | 127.66 | 7500 | 0.6739 | 1 |
| 0.3815 | 136.17 | 8000 | 0.6754 | 1 |
| 0.3704 | 144.68 | 8500 | 0.6843 | 1 |
| 0.3625 | 153.19 | 9000 | 0.6707 | 1 |
| 0.356 | 161.7 | 9500 | 0.6812 | 1 |
| 0.3541 | 170.21 | 10000 | 0.6829 | 1 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
somosnlp-hackathon-2023/SalpiBloom_base_lr3e4_1b1 | somosnlp-hackathon-2023 | "2023-03-27T23:44:49Z" | 0 | 1 | adapter-transformers | [
"adapter-transformers",
"es",
"license:apache-2.0",
"region:us"
] | null | "2023-03-27T23:32:12Z" | ---
license: apache-2.0
language:
- es
library_name: adapter-transformers
---
<div style="text-align:center;width:350px;height:350px;">
<img src="https://huggingface.co/hackathon-somos-nlp-2023/SalpiBloom-1b1/resolve/main/salpibloom.png" alt="SAlpaca logo">
</div>
# SAlpiBloom: Spanish + Alpaca + Bloom (WIP)
Learning rate = 3e-4
## Adapter Description
This adapter was created with the [PEFT](https://github.com/huggingface/peft) library and allowed the base model [bigscience/bloom-1b1](https://huggingface.co/bigscience/bloom-1b1) to be fine-tuned on the [Spanish Alpaca Dataset](https://huggingface.co/datasets/bertin-project/alpaca-spanish) by using the method *LoRA*.
## How to use
```py
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
peft_model_id = "hackathon-somos-nlp-2023/SalpiBloom_base_lr3e4_1b1"
config = PeftConfig.from_pretrained(peft_model_id)
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=True, device_map='auto')
# tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
tokenizer = AutoTokenizer.from_pretrained(peft_model_id)
# Load the Lora model
model = PeftModel.from_pretrained(model, peft_model_id)
def gen_conversation(text):
    text = "<SC>instruction: " + text + "\n "
    batch = tokenizer(text, return_tensors='pt')
    with torch.cuda.amp.autocast():
        output_tokens = model.generate(**batch, max_new_tokens=256, eos_token_id=50258, early_stopping=True, temperature=.9)
    print('\n\n', tokenizer.decode(output_tokens[0], skip_special_tokens=False))

text = "Redacta un cuento corto"
gen_conversation(text)
```
## Resources used
Google Colab machine with the following specifications
<div style="text-align:center;width:550px;height:550px;">
<img src="https://huggingface.co/hackathon-somos-nlp-2023/bertin-gpt-j-6B-es-finetuned-salpaca/resolve/main/resource.jpeg" alt="Resource logo">
</div>
## Citation
```
@misc {hackathon-somos-nlp-2023,
author = { {Edison Bejarano, Leonardo Bolaños, Alberto Ceballos, Santiago Pineda, Nicolay Potes} },
title = { SalpiBloom_base_lr3e4_1b1 },
year = 2023,
url = { https://huggingface.co/hackathon-somos-nlp-2023/SalpiBloom_base_lr3e4_1b1 },
publisher = { Hugging Face }
}
``` |
areegtarek/patientcommunication-4bit | areegtarek | "2024-02-06T13:35:16Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-02-06T13:32:50Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
cleanrl/Solaris-v5-cleanba_ppo_envpool_machado_atari_wrapper-seed1 | cleanrl | "2023-03-02T22:20:43Z" | 0 | 0 | cleanrl | [
"cleanrl",
"tensorboard",
"Solaris-v5",
"deep-reinforcement-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2023-03-02T22:20:37Z" | ---
tags:
- Solaris-v5
- deep-reinforcement-learning
- reinforcement-learning
- custom-implementation
library_name: cleanrl
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Solaris-v5
type: Solaris-v5
metrics:
- type: mean_reward
value: 1644.00 +/- 848.99
name: mean_reward
verified: false
---
# (CleanRL) **PPO** Agent Playing **Solaris-v5**
This is a trained model of a PPO agent playing Solaris-v5.
The model was trained by using [CleanRL](https://github.com/vwxyzjn/cleanrl) and the most up-to-date training code can be
found [here](https://github.com/vwxyzjn/cleanrl/blob/master/cleanrl/cleanba_ppo_envpool_machado_atari_wrapper.py).
## Get Started
To use this model, please install the `cleanrl` package with the following command:
```
pip install "cleanrl[jax,envpool,atari]"
python -m cleanrl_utils.enjoy --exp-name cleanba_ppo_envpool_machado_atari_wrapper --env-id Solaris-v5
```
Please refer to the [documentation](https://docs.cleanrl.dev/get-started/zoo/) for more detail.
## Command to reproduce the training
```bash
curl -OL https://huggingface.co/cleanrl/Solaris-v5-cleanba_ppo_envpool_machado_atari_wrapper-seed1/raw/main/cleanba_ppo_envpool_machado_atari_wrapper.py
curl -OL https://huggingface.co/cleanrl/Solaris-v5-cleanba_ppo_envpool_machado_atari_wrapper-seed1/raw/main/pyproject.toml
curl -OL https://huggingface.co/cleanrl/Solaris-v5-cleanba_ppo_envpool_machado_atari_wrapper-seed1/raw/main/poetry.lock
poetry install --all-extras
python cleanba_ppo_envpool_machado_atari_wrapper.py --distributed --learner-device-ids 1 2 3 --track --wandb-project-name cleanba --save-model --upload-model --hf-entity cleanrl --env-id Solaris-v5 --seed 1
```
# Hyperparameters
```python
{'actor_device_ids': [0],
'actor_devices': ['gpu:0'],
'anneal_lr': True,
'async_batch_size': 20,
'async_update': 3,
'batch_size': 15360,
'capture_video': False,
'clip_coef': 0.1,
'concurrency': True,
'cuda': True,
'distributed': True,
'ent_coef': 0.01,
'env_id': 'Solaris-v5',
'exp_name': 'cleanba_ppo_envpool_machado_atari_wrapper',
'gae_lambda': 0.95,
'gamma': 0.99,
'global_learner_decices': ['gpu:1',
'gpu:2',
'gpu:3',
'gpu:5',
'gpu:6',
'gpu:7'],
'hf_entity': 'cleanrl',
'learner_device_ids': [1, 2, 3],
'learner_devices': ['gpu:1', 'gpu:2', 'gpu:3'],
'learning_rate': 0.00025,
'local_batch_size': 7680,
'local_minibatch_size': 1920,
'local_num_envs': 60,
'local_rank': 0,
'max_grad_norm': 0.5,
'minibatch_size': 3840,
'norm_adv': True,
'num_envs': 120,
'num_minibatches': 4,
'num_steps': 128,
'num_updates': 3255,
'profile': False,
'save_model': True,
'seed': 1,
'target_kl': None,
'test_actor_learner_throughput': False,
'torch_deterministic': True,
'total_timesteps': 50000000,
'track': True,
'update_epochs': 4,
'upload_model': True,
'vf_coef': 0.5,
'wandb_entity': None,
'wandb_project_name': 'cleanba',
'world_size': 2}
```
|
genki10/Version19ASAP_FineTuningBERT_AugV19_k3_task1_organization_k3_k3_fold2 | genki10 | "2025-03-09T16:44:48Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-03-09T16:30:37Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: Version19ASAP_FineTuningBERT_AugV19_k3_task1_organization_k3_k3_fold2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Version19ASAP_FineTuningBERT_AugV19_k3_task1_organization_k3_k3_fold2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6490
- Qwk: 0.5666
- Mse: 0.6486
- Rmse: 0.8054
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|
| No log | 1.0 | 1 | 13.5050 | 0.0 | 13.5052 | 3.6749 |
| No log | 2.0 | 2 | 11.5637 | -0.0008 | 11.5641 | 3.4006 |
| No log | 3.0 | 3 | 9.8983 | 0.0066 | 9.8989 | 3.1462 |
| No log | 4.0 | 4 | 9.1109 | 0.0 | 9.1116 | 3.0185 |
| No log | 5.0 | 5 | 7.7460 | 0.0 | 7.7465 | 2.7832 |
| No log | 6.0 | 6 | 7.4989 | 0.0 | 7.4992 | 2.7385 |
| No log | 7.0 | 7 | 6.9785 | 0.0012 | 6.9789 | 2.6418 |
| No log | 8.0 | 8 | 5.8114 | 0.0348 | 5.8118 | 2.4108 |
| No log | 9.0 | 9 | 4.7188 | 0.0246 | 4.7193 | 2.1724 |
| No log | 10.0 | 10 | 4.1510 | 0.0117 | 4.1514 | 2.0375 |
| No log | 11.0 | 11 | 3.7349 | 0.0078 | 3.7353 | 1.9327 |
| No log | 12.0 | 12 | 3.3987 | 0.0039 | 3.3992 | 1.8437 |
| No log | 13.0 | 13 | 2.7531 | 0.0050 | 2.7536 | 1.6594 |
| No log | 14.0 | 14 | 2.3576 | 0.1027 | 2.3581 | 1.5356 |
| No log | 15.0 | 15 | 2.0313 | 0.1127 | 2.0318 | 1.4254 |
| No log | 16.0 | 16 | 1.8495 | 0.1161 | 1.8501 | 1.3602 |
| No log | 17.0 | 17 | 1.6576 | 0.0844 | 1.6581 | 1.2877 |
| No log | 18.0 | 18 | 1.3280 | 0.0372 | 1.3284 | 1.1526 |
| No log | 19.0 | 19 | 1.2753 | 0.0345 | 1.2757 | 1.1295 |
| No log | 20.0 | 20 | 1.1213 | 0.0280 | 1.1217 | 1.0591 |
| No log | 21.0 | 21 | 1.0098 | 0.0345 | 1.0102 | 1.0051 |
| No log | 22.0 | 22 | 1.0711 | 0.0557 | 1.0715 | 1.0351 |
| No log | 23.0 | 23 | 0.9404 | 0.1447 | 0.9408 | 0.9699 |
| No log | 24.0 | 24 | 0.7685 | 0.3570 | 0.7688 | 0.8768 |
| No log | 25.0 | 25 | 0.7339 | 0.3557 | 0.7341 | 0.8568 |
| No log | 26.0 | 26 | 0.7044 | 0.3325 | 0.7047 | 0.8394 |
| No log | 27.0 | 27 | 0.6950 | 0.3496 | 0.6952 | 0.8338 |
| No log | 28.0 | 28 | 0.6681 | 0.3402 | 0.6682 | 0.8175 |
| No log | 29.0 | 29 | 0.6925 | 0.2644 | 0.6926 | 0.8322 |
| No log | 30.0 | 30 | 0.6615 | 0.2835 | 0.6614 | 0.8133 |
| No log | 31.0 | 31 | 0.5934 | 0.3651 | 0.5934 | 0.7703 |
| No log | 32.0 | 32 | 0.5838 | 0.3687 | 0.5837 | 0.7640 |
| No log | 33.0 | 33 | 0.6327 | 0.3058 | 0.6325 | 0.7953 |
| No log | 34.0 | 34 | 0.6656 | 0.3004 | 0.6653 | 0.8157 |
| No log | 35.0 | 35 | 0.6137 | 0.3878 | 0.6134 | 0.7832 |
| No log | 36.0 | 36 | 0.5510 | 0.4655 | 0.5509 | 0.7422 |
| No log | 37.0 | 37 | 0.5408 | 0.4856 | 0.5406 | 0.7352 |
| No log | 38.0 | 38 | 0.5734 | 0.5117 | 0.5729 | 0.7569 |
| No log | 39.0 | 39 | 0.6217 | 0.5105 | 0.6210 | 0.7880 |
| No log | 40.0 | 40 | 0.6008 | 0.5556 | 0.6001 | 0.7746 |
| No log | 41.0 | 41 | 0.5415 | 0.5581 | 0.5410 | 0.7355 |
| No log | 42.0 | 42 | 0.5450 | 0.5695 | 0.5444 | 0.7379 |
| No log | 43.0 | 43 | 0.5895 | 0.5782 | 0.5887 | 0.7673 |
| No log | 44.0 | 44 | 0.5937 | 0.5775 | 0.5929 | 0.7700 |
| No log | 45.0 | 45 | 0.5810 | 0.5930 | 0.5804 | 0.7619 |
| No log | 46.0 | 46 | 0.6049 | 0.6010 | 0.6043 | 0.7773 |
| No log | 47.0 | 47 | 0.6624 | 0.5693 | 0.6614 | 0.8133 |
| No log | 48.0 | 48 | 0.6771 | 0.5722 | 0.6761 | 0.8222 |
| No log | 49.0 | 49 | 0.6359 | 0.5985 | 0.6351 | 0.7969 |
| No log | 50.0 | 50 | 0.6374 | 0.5931 | 0.6367 | 0.7980 |
| No log | 51.0 | 51 | 0.6486 | 0.5966 | 0.6478 | 0.8049 |
| No log | 52.0 | 52 | 0.6985 | 0.5745 | 0.6976 | 0.8352 |
| No log | 53.0 | 53 | 0.7386 | 0.5580 | 0.7376 | 0.8588 |
| No log | 54.0 | 54 | 0.7130 | 0.5697 | 0.7122 | 0.8439 |
| No log | 55.0 | 55 | 0.7334 | 0.5660 | 0.7329 | 0.8561 |
| No log | 56.0 | 56 | 0.7161 | 0.5565 | 0.7157 | 0.8460 |
| No log | 57.0 | 57 | 0.6601 | 0.5770 | 0.6596 | 0.8121 |
| No log | 58.0 | 58 | 0.7107 | 0.5713 | 0.7099 | 0.8425 |
| No log | 59.0 | 59 | 0.8449 | 0.5333 | 0.8437 | 0.9185 |
| No log | 60.0 | 60 | 0.8551 | 0.5341 | 0.8539 | 0.9241 |
| No log | 61.0 | 61 | 0.7541 | 0.5513 | 0.7533 | 0.8679 |
| No log | 62.0 | 62 | 0.6697 | 0.5619 | 0.6693 | 0.8181 |
| No log | 63.0 | 63 | 0.6730 | 0.5656 | 0.6727 | 0.8202 |
| No log | 64.0 | 64 | 0.6709 | 0.5660 | 0.6705 | 0.8188 |
| No log | 65.0 | 65 | 0.6706 | 0.5645 | 0.6701 | 0.8186 |
| No log | 66.0 | 66 | 0.6490 | 0.5666 | 0.6486 | 0.8054 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
genki10/BERT_V8_sp10_lw40_ex50_lo50_k2_k2_fold3 | genki10 | "2025-04-26T16:17:15Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-04-26T16:03:08Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: BERT_V8_sp10_lw40_ex50_lo50_k2_k2_fold3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT_V8_sp10_lw40_ex50_lo50_k2_k2_fold3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5564
- Qwk: 0.5636
- Mse: 0.5564
- Rmse: 0.7459
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 100
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:------:|
| No log | 1.0 | 2 | 11.9063 | -0.0279 | 11.9038 | 3.4502 |
| No log | 2.0 | 4 | 9.2561 | 0.0 | 9.2545 | 3.0421 |
| No log | 3.0 | 6 | 7.3906 | 0.0 | 7.3892 | 2.7183 |
| No log | 4.0 | 8 | 6.1470 | 0.0104 | 6.1455 | 2.4790 |
| No log | 5.0 | 10 | 4.7307 | 0.0114 | 4.7296 | 2.1748 |
| No log | 6.0 | 12 | 3.6712 | 0.0038 | 3.6702 | 1.9158 |
| No log | 7.0 | 14 | 2.9987 | 0.0 | 2.9976 | 1.7313 |
| No log | 8.0 | 16 | 2.2411 | 0.1175 | 2.2402 | 1.4967 |
| No log | 9.0 | 18 | 1.7994 | 0.0365 | 1.7986 | 1.3411 |
| No log | 10.0 | 20 | 1.5119 | 0.0302 | 1.5112 | 1.2293 |
| No log | 11.0 | 22 | 1.1775 | 0.0302 | 1.1769 | 1.0849 |
| No log | 12.0 | 24 | 1.0014 | 0.0202 | 1.0008 | 1.0004 |
| No log | 13.0 | 26 | 0.8976 | 0.3195 | 0.8970 | 0.9471 |
| No log | 14.0 | 28 | 0.8228 | 0.3673 | 0.8222 | 0.9068 |
| No log | 15.0 | 30 | 0.8523 | 0.2800 | 0.8517 | 0.9229 |
| No log | 16.0 | 32 | 0.7316 | 0.2326 | 0.7313 | 0.8551 |
| No log | 17.0 | 34 | 0.6307 | 0.3785 | 0.6306 | 0.7941 |
| No log | 18.0 | 36 | 0.6821 | 0.4362 | 0.6819 | 0.8258 |
| No log | 19.0 | 38 | 0.5622 | 0.4421 | 0.5621 | 0.7497 |
| No log | 20.0 | 40 | 0.5782 | 0.4760 | 0.5781 | 0.7603 |
| No log | 21.0 | 42 | 0.6375 | 0.5111 | 0.6373 | 0.7983 |
| No log | 22.0 | 44 | 0.5141 | 0.5263 | 0.5140 | 0.7169 |
| No log | 23.0 | 46 | 0.7375 | 0.4655 | 0.7373 | 0.8587 |
| No log | 24.0 | 48 | 0.5200 | 0.5555 | 0.5201 | 0.7212 |
| No log | 25.0 | 50 | 0.5088 | 0.6219 | 0.5088 | 0.7133 |
| No log | 26.0 | 52 | 0.7115 | 0.5143 | 0.7114 | 0.8435 |
| No log | 27.0 | 54 | 0.7079 | 0.4985 | 0.7077 | 0.8412 |
| No log | 28.0 | 56 | 0.5335 | 0.5843 | 0.5335 | 0.7304 |
| No log | 29.0 | 58 | 0.6170 | 0.5431 | 0.6170 | 0.7855 |
| No log | 30.0 | 60 | 0.5067 | 0.5731 | 0.5068 | 0.7119 |
| No log | 31.0 | 62 | 0.6167 | 0.5527 | 0.6169 | 0.7854 |
| No log | 32.0 | 64 | 0.5177 | 0.5983 | 0.5178 | 0.7196 |
| No log | 33.0 | 66 | 0.5050 | 0.6377 | 0.5049 | 0.7106 |
| No log | 34.0 | 68 | 0.5707 | 0.5896 | 0.5706 | 0.7554 |
| No log | 35.0 | 70 | 0.6511 | 0.5396 | 0.6510 | 0.8068 |
| No log | 36.0 | 72 | 0.5217 | 0.5770 | 0.5215 | 0.7222 |
| No log | 37.0 | 74 | 0.5531 | 0.5585 | 0.5529 | 0.7436 |
| No log | 38.0 | 76 | 0.6864 | 0.4928 | 0.6862 | 0.8284 |
| No log | 39.0 | 78 | 0.6373 | 0.5037 | 0.6372 | 0.7982 |
| No log | 40.0 | 80 | 0.5506 | 0.5552 | 0.5506 | 0.7420 |
| No log | 41.0 | 82 | 0.5623 | 0.5400 | 0.5622 | 0.7498 |
| No log | 42.0 | 84 | 0.6502 | 0.5007 | 0.6500 | 0.8062 |
| No log | 43.0 | 86 | 0.5781 | 0.5547 | 0.5779 | 0.7602 |
| No log | 44.0 | 88 | 0.5708 | 0.5663 | 0.5706 | 0.7554 |
| No log | 45.0 | 90 | 0.6341 | 0.5154 | 0.6339 | 0.7962 |
| No log | 46.0 | 92 | 0.5815 | 0.5502 | 0.5815 | 0.7626 |
| No log | 47.0 | 94 | 0.6164 | 0.5149 | 0.6164 | 0.7851 |
| No log | 48.0 | 96 | 0.5450 | 0.5598 | 0.5450 | 0.7382 |
| No log | 49.0 | 98 | 0.5788 | 0.5238 | 0.5788 | 0.7608 |
| No log | 50.0 | 100 | 0.5876 | 0.5205 | 0.5875 | 0.7665 |
| No log | 51.0 | 102 | 0.5490 | 0.5597 | 0.5489 | 0.7409 |
| No log | 52.0 | 104 | 0.5796 | 0.5422 | 0.5795 | 0.7612 |
| No log | 53.0 | 106 | 0.5875 | 0.5350 | 0.5874 | 0.7664 |
| No log | 54.0 | 108 | 0.5454 | 0.5723 | 0.5453 | 0.7384 |
| No log | 55.0 | 110 | 0.5645 | 0.5452 | 0.5643 | 0.7512 |
| No log | 56.0 | 112 | 0.5550 | 0.5512 | 0.5550 | 0.7450 |
| No log | 57.0 | 114 | 0.5879 | 0.5476 | 0.5878 | 0.7667 |
| No log | 58.0 | 116 | 0.5735 | 0.5610 | 0.5734 | 0.7572 |
| No log | 59.0 | 118 | 0.5411 | 0.5687 | 0.5410 | 0.7356 |
| No log | 60.0 | 120 | 0.5391 | 0.5799 | 0.5391 | 0.7342 |
| No log | 61.0 | 122 | 0.6113 | 0.5802 | 0.6113 | 0.7818 |
| No log | 62.0 | 124 | 0.6268 | 0.5376 | 0.6267 | 0.7916 |
| No log | 63.0 | 126 | 0.5564 | 0.5636 | 0.5564 | 0.7459 |
### Framework versions
- Transformers 4.51.1
- Pytorch 2.5.1+cu124
- Datasets 3.5.0
- Tokenizers 0.21.0
|
MHGanainy/gpt2-xl-lora-multi-512-k5-31-im-2 | MHGanainy | "2024-11-03T19:41:31Z" | 39 | 0 | peft | [
"peft",
"tensorboard",
"safetensors",
"generated_from_trainer",
"base_model:openai-community/gpt2-xl",
"base_model:adapter:openai-community/gpt2-xl",
"license:mit",
"region:us"
] | null | "2024-11-03T13:13:18Z" | ---
library_name: peft
license: mit
base_model: openai-community/gpt2-xl
tags:
- generated_from_trainer
model-index:
- name: gpt2-xl-lora-multi-512-k5-31-im-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-xl-lora-multi-512-k5-31-im-2
This model is a fine-tuned version of [openai-community/gpt2-xl](https://huggingface.co/openai-community/gpt2-xl) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3324
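Since this is a PEFT LoRA adapter on top of GPT-2 XL, a minimal loading sketch (the standard PEFT workflow, not an official snippet from this repo; the prompt is made up) is:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = AutoModelForCausalLM.from_pretrained("openai-community/gpt2-xl")
tokenizer = AutoTokenizer.from_pretrained("openai-community/gpt2-xl")

# Attach the LoRA adapter weights from this repository
model = PeftModel.from_pretrained(base_model, "MHGanainy/gpt2-xl-lora-multi-512-k5-31-im-2")

inputs = tokenizer("The court finds that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```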
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 33797
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.1
- Pytorch 2.1.0a0+32f93b1
- Datasets 3.1.0
- Tokenizers 0.20.1 |
xinyixiuxiu/albert-xxlarge-v2-SST2-finetuned-epoch2 | xinyixiuxiu | "2023-03-26T12:09:18Z" | 61 | 0 | transformers | [
"transformers",
"tf",
"albert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-03-26T10:52:19Z" | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: xinyixiuxiu/albert-xxlarge-v2-SST2-finetuned-epoch2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# xinyixiuxiu/albert-xxlarge-v2-SST2-finetuned-epoch2
This model is a fine-tuned version of [albert-xxlarge-v2](https://huggingface.co/albert-xxlarge-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1021
- Train Accuracy: 0.9660
- Validation Loss: 0.1217
- Validation Accuracy: 0.9553
- Epoch: 1
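A minimal inference sketch with TensorFlow (an assumption based on the model's TF checkpoint; this repo provides no official snippet):

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAlbertForSequenceClassification

repo = "xinyixiuxiu/albert-xxlarge-v2-SST2-finetuned-epoch2"
tokenizer = AutoTokenizer.from_pretrained(repo)  # assumes tokenizer files ship with the repo
model = TFAlbertForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("A touching and wonderfully acted film.", return_tensors="tf")
logits = model(**inputs).logits
# SST-2 convention: index 1 = positive, index 0 = negative
print(tf.math.argmax(logits, axis=-1).numpy())
```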
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 3e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1863 | 0.9313 | 0.1210 | 0.9610 | 0 |
| 0.1021 | 0.9660 | 0.1217 | 0.9553 | 1 |
### Framework versions
- Transformers 4.21.1
- TensorFlow 2.7.0
- Datasets 2.10.1
- Tokenizers 0.12.1
|
Paul27/CyberXpert-llama-3.2-3b-1.1 | Paul27 | "2025-04-22T08:35:30Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-04-22T08:35:18Z" |
lesso03/94f3b478-4b57-4c79-8575-76dc53053d6c | lesso03 | "2025-04-10T07:16:04Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-10T07:00:49Z" |
jon-fernandes/whisper-small-200 | jon-fernandes | "2025-04-02T21:08:56Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-02T21:08:53Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
kvcsrnt/renataai | kvcsrnt | "2023-11-22T17:45:06Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"stable-diffusion",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-11-22T17:40:24Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### renataai Dreambooth model trained by kvcsrnt with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
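As a rough sketch (not part of the original notebook), the checkpoint can also be loaded directly with diffusers; the prompt token `renataai` is assumed from the repository name and may differ from the actual instance prompt used during training.

```python
import torch
from diffusers import StableDiffusionPipeline

# Sketch: load the DreamBooth checkpoint with diffusers; "renataai" is an
# assumed instance token taken from the repository name.
pipe = StableDiffusionPipeline.from_pretrained("kvcsrnt/renataai", torch_dtype=torch.float16)
pipe.to("cuda")
image = pipe("a portrait photo of renataai, studio lighting").images[0]
image.save("renataai.png")
```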
Sample pictures of this concept:
|
bowilleatyou/f8dcd121-25ad-49bd-be57-3133d95d0f86 | bowilleatyou | "2025-03-24T11:43:58Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-03-24T11:19:37Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
Xenova/wangchanberta-base-att-spm-uncased | Xenova | "2024-10-08T13:42:19Z" | 4 | 0 | transformers.js | [
"transformers.js",
"onnx",
"camembert",
"fill-mask",
"base_model:airesearch/wangchanberta-base-att-spm-uncased",
"base_model:quantized:airesearch/wangchanberta-base-att-spm-uncased",
"region:us"
] | fill-mask | "2023-09-06T01:02:09Z" | ---
base_model: airesearch/wangchanberta-base-att-spm-uncased
library_name: transformers.js
---
https://huggingface.co/airesearch/wangchanberta-base-att-spm-uncased with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). |
anvorja/xlm-roberta-large-clinical-ner-data-clean-inconcluso-3-subtokens-con-I | anvorja | "2025-03-20T07:53:59Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"base_model:FacebookAI/xlm-roberta-large",
"base_model:finetune:FacebookAI/xlm-roberta-large",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | "2025-03-20T05:41:52Z" | ---
library_name: transformers
license: mit
base_model: FacebookAI/xlm-roberta-large
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: xlm-roberta-large-clinical-ner-data-clean-inconcluso-3-subtokens-con-I
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-large-clinical-ner-data-clean-inconcluso-3-subtokens-con-I
This model is a fine-tuned version of [FacebookAI/xlm-roberta-large](https://huggingface.co/FacebookAI/xlm-roberta-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0150
- Precision: 0.9856
- Recall: 0.9909
- F1: 0.9882
- Accuracy: 0.9957
## Model description
More information needed
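A minimal usage sketch (not from the model authors), assuming the checkpoint works with the standard token-classification pipeline; the example sentence is illustrative only:

```python
from transformers import pipeline

# Load the fine-tuned clinical NER checkpoint for token classification.
ner = pipeline(
    "token-classification",
    model="anvorja/xlm-roberta-large-clinical-ner-data-clean-inconcluso-3-subtokens-con-I",
    aggregation_strategy="simple",  # merge B-/I- subtoken predictions into entity spans
)
print(ner("Paciente de 45 años con diagnóstico de diabetes mellitus tipo 2."))
```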
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 64
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 2.3378 | 1.0 | 86 | 2.2693 | 0.0 | 0.0 | 0.0 | 0.6217 |
| 1.0419 | 2.0 | 172 | 0.9392 | 0.5312 | 0.3984 | 0.4553 | 0.7981 |
| 0.4857 | 3.0 | 258 | 0.3492 | 0.7619 | 0.7946 | 0.7779 | 0.9199 |
| 0.2688 | 4.0 | 344 | 0.1959 | 0.8397 | 0.9124 | 0.8745 | 0.9544 |
| 0.1815 | 5.0 | 430 | 0.1181 | 0.9060 | 0.9328 | 0.9192 | 0.9718 |
| 0.1215 | 6.0 | 516 | 0.0908 | 0.9374 | 0.95 | 0.9437 | 0.9780 |
| 0.1049 | 7.0 | 602 | 0.0791 | 0.9278 | 0.9602 | 0.9437 | 0.9799 |
| 0.0976 | 8.0 | 688 | 0.0556 | 0.9556 | 0.9715 | 0.9635 | 0.9864 |
| 0.0675 | 9.0 | 774 | 0.0492 | 0.9635 | 0.9785 | 0.9709 | 0.9886 |
| 0.0648 | 10.0 | 860 | 0.0362 | 0.9682 | 0.9806 | 0.9744 | 0.9906 |
| 0.0434 | 11.0 | 946 | 0.0319 | 0.9729 | 0.9828 | 0.9778 | 0.9918 |
| 0.0405 | 12.0 | 1032 | 0.0301 | 0.9724 | 0.9849 | 0.9786 | 0.9924 |
| 0.0484 | 13.0 | 1118 | 0.0267 | 0.9792 | 0.9876 | 0.9834 | 0.9932 |
| 0.0359 | 14.0 | 1204 | 0.0199 | 0.9808 | 0.9876 | 0.9842 | 0.9941 |
| 0.0395 | 15.0 | 1290 | 0.0174 | 0.9845 | 0.9882 | 0.9863 | 0.9951 |
| 0.0278 | 16.0 | 1376 | 0.0158 | 0.9824 | 0.9892 | 0.9858 | 0.9951 |
| 0.0297 | 17.0 | 1462 | 0.0153 | 0.9829 | 0.9892 | 0.9861 | 0.9952 |
| 0.0217 | 18.0 | 1548 | 0.0151 | 0.9856 | 0.9903 | 0.9879 | 0.9955 |
| 0.0324 | 19.0 | 1634 | 0.0150 | 0.9856 | 0.9909 | 0.9882 | 0.9957 |
| 0.0236 | 19.7719 | 1700 | 0.0150 | 0.9856 | 0.9909 | 0.9882 | 0.9957 |
### Framework versions
- Transformers 4.49.0
- Pytorch 2.6.0+cu124
- Datasets 3.4.1
- Tokenizers 0.21.1
|
madisongrace99/generation2 | madisongrace99 | "2023-11-10T03:40:38Z" | 3 | 0 | transformers | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-11-09T22:41:26Z" | ---
tags:
- generated_from_trainer
model-index:
- name: generation2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# generation2
This model was trained from scratch on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
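Pending details from the authors, a minimal sketch of how a BART text2text checkpoint like this is typically queried; the input string is a placeholder, since the training task is undocumented:

```python
from transformers import pipeline

# The pipeline tag on this repo is text2text-generation; the expected input
# format is unknown, so this call is illustrative only.
generator = pipeline("text2text-generation", model="madisongrace99/generation2")
print(generator("Example input text", max_new_tokens=64))
```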
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Framework versions
- Transformers 4.34.1
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1
|
koshimaki/dinosiglip-224px-1b-abs | koshimaki | "2024-11-27T08:44:58Z" | 104 | 0 | transformers | [
"transformers",
"safetensors",
"prismatic",
"feature-extraction",
"custom_code",
"arxiv:1910.09700",
"region:us"
] | feature-extraction | "2024-11-27T08:42:08Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
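An assumption-heavy sketch: the `custom_code` tag implies the repo ships its own modeling code, so loading requires `trust_remote_code=True`; nothing else about the interface is documented here.

```python
from transformers import AutoModel

# The repo registers a custom "prismatic" architecture, so remote code must
# be trusted for AutoModel to resolve it.
model = AutoModel.from_pretrained("koshimaki/dinosiglip-224px-1b-abs", trust_remote_code=True)
print(model.config)
```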
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
peggypeng/merged_model | peggypeng | "2025-04-12T08:38:15Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-12T08:30:04Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
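In the absence of author-provided code, a hedged sketch: the tags mark this as a conversational Gemma-2 text-generation model, so a chat-style pipeline call should apply; the message content is illustrative.

```python
from transformers import pipeline

# Chat-style generation with the merged Gemma-2 checkpoint.
chat = pipeline("text-generation", model="peggypeng/merged_model", device_map="auto")
messages = [{"role": "user", "content": "Summarize what a merged model is."}]
print(chat(messages, max_new_tokens=64)[0]["generated_text"])
```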
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tuantmdev/518c4492-1946-4f0a-806a-7165c56edef6 | tuantmdev | "2025-02-24T11:41:05Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0",
"base_model:adapter:WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0",
"license:llama3",
"region:us"
] | null | "2025-02-24T10:58:28Z" | ---
library_name: peft
license: llama3
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 518c4492-1946-4f0a-806a-7165c56edef6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
auto_find_batch_size: true
base_model: WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- bcd38f0f32e12400_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/bcd38f0f32e12400_train_data.json
type:
field_input: outline
field_instruction: topic
field_output: markdown
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: 3
eval_max_new_tokens: 128
eval_steps: 50
eval_table_size: null
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: false
group_by_length: true
hub_model_id: tuantmdev/518c4492-1946-4f0a-806a-7165c56edef6
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 1e-4
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 40
lora_alpha: 64
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 32
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 400
micro_batch_size: 2
mlflow_experiment_name: /tmp/bcd38f0f32e12400_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 50
save_strategy: steps
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 83c38fde-0fb0-4c37-ad81-6ceb85aac8e2
wandb_project: Gradients-On-Demand
wandb_run: unknown
wandb_runid: 83c38fde-0fb0-4c37-ad81-6ceb85aac8e2
warmup_steps: 80
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 518c4492-1946-4f0a-806a-7165c56edef6
This model is a fine-tuned version of [WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0](https://huggingface.co/WhiteRabbitNeo/Llama-3-WhiteRabbitNeo-8B-v2.0) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4134
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: AdamW (8-bit, bitsandbytes) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 80
- training_steps: 400
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0015 | 1 | 0.6393 |
| 0.5639 | 0.0742 | 50 | 0.4896 |
| 0.4889 | 0.1484 | 100 | 0.4574 |
| 0.4605 | 0.2226 | 150 | 0.4425 |
| 0.4365 | 0.2968 | 200 | 0.4315 |
| 0.4369 | 0.3710 | 250 | 0.4240 |
| 0.432 | 0.4452 | 300 | 0.4173 |
| 0.4213 | 0.5194 | 350 | 0.4141 |
| 0.4242 | 0.5936 | 400 | 0.4134 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
Jayem-11/mistral_7b_malawi | Jayem-11 | "2024-02-06T19:43:42Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-02-06T13:42:24Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
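As a stopgap sketch (not author-provided): the tags indicate a Mistral causal LM stored with bitsandbytes 4-bit weights, so the quantization config should be picked up from the checkpoint itself.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the 4-bit checkpoint; device_map places it on available accelerators.
tokenizer = AutoTokenizer.from_pretrained("Jayem-11/mistral_7b_malawi")
model = AutoModelForCausalLM.from_pretrained("Jayem-11/mistral_7b_malawi", device_map="auto")

inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```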
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
hopkins/mbart-finetuned-eng-kor-34 | hopkins | "2023-07-03T01:33:22Z" | 110 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"mbart",
"text2text-generation",
"translation",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | translation | "2023-07-03T01:15:53Z" | ---
tags:
- translation
- generated_from_trainer
metrics:
- bleu
model-index:
- name: mbart-finetuned-eng-kor-34
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mbart-finetuned-eng-kor-34
This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbart-large-50-many-to-many-mmt) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9937
- Bleu: 7.1397
## Model description
More information needed
## Intended uses & limitations
More information needed
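A minimal inference sketch (not from the authors), assuming the fine-tune keeps the mBART-50 language codes of its base model:

```python
from transformers import MBartForConditionalGeneration, MBart50TokenizerFast

model = MBartForConditionalGeneration.from_pretrained("hopkins/mbart-finetuned-eng-kor-34")
tokenizer = MBart50TokenizerFast.from_pretrained("hopkins/mbart-finetuned-eng-kor-34")

tokenizer.src_lang = "en_XX"  # source: English
inputs = tokenizer("How are you today?", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.lang_code_to_id["ko_KR"],  # target: Korean
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```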
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
huwhitememes/ayannapressley-dev2pro-lora | huwhitememes | "2025-02-16T22:52:49Z" | 29 | 0 | diffusers | [
"diffusers",
"text-to-image",
"flux",
"lora",
"template:sd-lora",
"fluxgym",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-02-12T02:01:34Z" | ---
tags:
- text-to-image
- flux
- lora
- diffusers
- template:sd-lora
- fluxgym
widget:
- output:
url: sample/ayannapressley-dev2pro-lora_009600_00_20250211162928.png
text: A photo of Ayanna Pressley, Ayanna Pressley,
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: A photo of Ayanna Pressley, Ayanna Pressley,
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
---
# ayannapressley-dev2pro-lora
A Flux LoRA trained on a local computer with [Fluxgym](https://github.com/cocktailpeanut/fluxgym)
<Gallery />
## Trigger words
You should use `A photo of Ayanna Pressley, Ayanna Pressley,` to trigger the image generation.
## Download model and use it with ComfyUI, AUTOMATIC1111, SD.Next, Invoke AI, Forge, etc.
Weights for this model are available in Safetensors format.
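For diffusers users, a sketch under the assumption that the safetensors LoRA in this repo is diffusers-compatible; the step count and dtype are typical Flux defaults, not tested values.

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16)
pipe.load_lora_weights("huwhitememes/ayannapressley-dev2pro-lora")
pipe.to("cuda")

image = pipe(
    "A photo of Ayanna Pressley, Ayanna Pressley,",  # trigger words from above
    num_inference_steps=28,
).images[0]
image.save("sample.png")
```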
|
Lambent/CosmoAlpacaLisa-1b | Lambent | "2024-04-05T13:21:42Z" | 139 | 0 | transformers | [
"transformers",
"pytorch",
"llama",
"text-generation",
"generated_from_trainer",
"base_model:HuggingFaceTB/cosmo-1b",
"base_model:finetune:HuggingFaceTB/cosmo-1b",
"license:cc",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-04-04T15:13:58Z" | ---
license: cc
base_model: HuggingFaceTB/cosmo-1b
tags:
- generated_from_trainer
model-index:
- name: lisa-out
results: []
---
Trying out some LISA training.
Too many settings changed for this to be a strictly direct comparison, but here are the nous-eval comparisons with CosmoAlpacaLight, which used LoRA:
| Model |AGIEval|GPT4All|TruthfulQA|Bigbench|Average|
|-----------------------------------------------------------------------|------:|------:|---------:|-------:|------:|
|[CosmoAlpacaLisa-1b](https://huggingface.co/Lambent/CosmoAlpacaLisa-1b)| 23.89| 51.93| 39.93| 28.68| 36.11|
|[CosmoAlpacaLight-1b](https://huggingface.co/Lambent/CosmoAlpacaLight-1b)| 24.28| 51.31| 40.33| 29.47| 36.35|
|[cosmo-1b](https://huggingface.co/HuggingFaceTB/cosmo-1b)| 22.97| 52.01| 38.02| 28.73| 35.43|
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.0`
```yaml
base_model: HuggingFaceTB/cosmo-1b
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
load_in_8bit: false
load_in_4bit: false
strict: false
datasets:
- path: vicgalle/alpaca-gpt4
type: alpaca
dataset_prepared_path:
val_set_size: 0.05
output_dir: ./lisa-out
sequence_len: 2048
sample_packing: true
pad_to_sequence_len: true
adapter:
lora_model_dir:
lora_r:
lora_alpha:
lora_dropout:
lora_target_linear:
lora_fan_in_fan_out:
lisa_n_layers: 8
lisa_step_interval: 10
lisa_layers_attribute: model.layers
wandb_project: CosmoAlpacaLisa-1b-v0.1
wandb_entity:
wandb_watch:
wandb_name:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 5e-5
train_on_inputs: false
group_by_length: false
bf16: auto
fp16:
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 10
evals_per_epoch: 4
saves_per_epoch: 1
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
```
</details><br>
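For context, a toy sketch of the idea behind the `lisa_n_layers: 8` and `lisa_step_interval: 10` keys in the config above: every `step_interval` optimizer steps, all transformer blocks are frozen and a fresh random subset of `n_layers` blocks is unfrozen. This is an illustrative reimplementation, not axolotl's actual code.

```python
import random
import torch.nn as nn

def lisa_reselect(layers: nn.ModuleList, n_layers: int) -> None:
    """Freeze every transformer block, then unfreeze a random subset (LISA-style)."""
    for layer in layers:
        for p in layer.parameters():
            p.requires_grad = False
    for layer in random.sample(list(layers), k=n_layers):
        for p in layer.parameters():
            p.requires_grad = True

# In a training loop, roughly:
#   if step % lisa_step_interval == 0:
#       lisa_reselect(model.model.layers, lisa_n_layers)
```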
# lisa-out
This model is a fine-tuned version of [HuggingFaceTB/cosmo-1b](https://huggingface.co/HuggingFaceTB/cosmo-1b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0634
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2281 | 0.0 | 1 | 1.2636 |
| 1.0796 | 0.25 | 166 | 1.0695 |
| 1.0272 | 0.5 | 332 | 1.0644 |
| 1.0471 | 0.75 | 498 | 1.0634 |
### Framework versions
- Transformers 4.40.0.dev0
- Pytorch 2.1.2+cu118
- Datasets 2.18.0
- Tokenizers 0.15.0
|
kendrickfff/vit-emotion | kendrickfff | "2024-08-31T09:43:46Z" | 5 | 0 | null | [
"safetensors",
"vit",
"generated_from_trainer",
"dataset:imagefolder",
"base_model:google/vit-base-patch16-224-in21k",
"base_model:finetune:google/vit-base-patch16-224-in21k",
"license:apache-2.0",
"model-index",
"region:us"
] | null | "2024-08-31T08:52:20Z" | ---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-emotion
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.61875
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-emotion
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1858
- Accuracy: 0.6188
## Model description
More information needed
## Intended uses & limitations
More information needed
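Until the authors document usage, a minimal sketch; the model-index confirms image classification, but the emotion label set of the underlying imagefolder dataset is not documented, and the file name below is a placeholder.

```python
from transformers import pipeline

classifier = pipeline("image-classification", model="kendrickfff/vit-emotion")
print(classifier("face.jpg"))  # placeholder path; any image file or URL works
```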
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.8403 | 1.0 | 40 | 1.7317 | 0.3063 |
| 1.4783 | 2.0 | 80 | 1.5047 | 0.4938 |
| 1.1866 | 3.0 | 120 | 1.3522 | 0.55 |
| 0.8581 | 4.0 | 160 | 1.2084 | 0.575 |
| 0.6056 | 5.0 | 200 | 1.2348 | 0.5375 |
| 0.3745 | 6.0 | 240 | 1.2119 | 0.5625 |
| 0.2129 | 7.0 | 280 | 1.2012 | 0.5437 |
| 0.1547 | 8.0 | 320 | 1.2181 | 0.5875 |
| 0.1216 | 9.0 | 360 | 1.2196 | 0.5875 |
| 0.1023 | 10.0 | 400 | 1.1858 | 0.6188 |
| 0.102 | 11.0 | 440 | 1.2190 | 0.5938 |
| 0.083 | 12.0 | 480 | 1.2149 | 0.6125 |
| 0.0917 | 13.0 | 520 | 1.2600 | 0.5875 |
| 0.0807 | 14.0 | 560 | 1.2367 | 0.6062 |
| 0.0741 | 15.0 | 600 | 1.2382 | 0.6 |
| 0.0721 | 16.0 | 640 | 1.2464 | 0.5875 |
| 0.0678 | 17.0 | 680 | 1.2548 | 0.5938 |
| 0.0752 | 18.0 | 720 | 1.2591 | 0.5875 |
| 0.0657 | 19.0 | 760 | 1.2590 | 0.6062 |
| 0.0643 | 20.0 | 800 | 1.2589 | 0.5938 |
### Framework versions
- Transformers 4.42.4
- Pytorch 2.4.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
|
nomadrp/mdpo-th-v16 | nomadrp | "2025-04-27T18:38:25Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"dpo",
"arxiv:2305.18290",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"endpoints_compatible",
"region:us"
] | null | "2025-04-27T17:23:20Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:website@huggingface.co">an email</a></p>
</div>
</main>
</body>
</html> |
mradermacher/Ognoexperiment27Multi_verse_model-7B-GGUF | mradermacher | "2024-12-21T08:20:11Z" | 5 | 0 | transformers | [
"transformers",
"gguf",
"merge",
"mergekit",
"lazymergekit",
"automerger",
"en",
"base_model:automerger/Ognoexperiment27Multi_verse_model-7B",
"base_model:quantized:automerger/Ognoexperiment27Multi_verse_model-7B",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2024-12-21T08:02:52Z" | ---
base_model: automerger/Ognoexperiment27Multi_verse_model-7B
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- merge
- mergekit
- lazymergekit
- automerger
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/automerger/Ognoexperiment27Multi_verse_model-7B
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
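As a quick sketch beyond those READMEs: one common way to run a downloaded quant from Python is llama-cpp-python (an assumption on my part; this card does not prescribe a runtime).

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Point model_path at whichever quant you downloaded from the table below.
llm = Llama(model_path="Ognoexperiment27Multi_verse_model-7B.Q4_K_M.gguf", n_ctx=4096)
out = llm("Write one sentence about merged language models.", max_tokens=64)
print(out["choices"][0]["text"])
```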
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Ognoexperiment27Multi_verse_model-7B-GGUF/resolve/main/Ognoexperiment27Multi_verse_model-7B.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/Ognoexperiment27Multi_verse_model-7B-GGUF/resolve/main/Ognoexperiment27Multi_verse_model-7B.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/Ognoexperiment27Multi_verse_model-7B-GGUF/resolve/main/Ognoexperiment27Multi_verse_model-7B.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Ognoexperiment27Multi_verse_model-7B-GGUF/resolve/main/Ognoexperiment27Multi_verse_model-7B.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Ognoexperiment27Multi_verse_model-7B-GGUF/resolve/main/Ognoexperiment27Multi_verse_model-7B.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/Ognoexperiment27Multi_verse_model-7B-GGUF/resolve/main/Ognoexperiment27Multi_verse_model-7B.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Ognoexperiment27Multi_verse_model-7B-GGUF/resolve/main/Ognoexperiment27Multi_verse_model-7B.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Ognoexperiment27Multi_verse_model-7B-GGUF/resolve/main/Ognoexperiment27Multi_verse_model-7B.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/Ognoexperiment27Multi_verse_model-7B-GGUF/resolve/main/Ognoexperiment27Multi_verse_model-7B.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/Ognoexperiment27Multi_verse_model-7B-GGUF/resolve/main/Ognoexperiment27Multi_verse_model-7B.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Ognoexperiment27Multi_verse_model-7B-GGUF/resolve/main/Ognoexperiment27Multi_verse_model-7B.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Ognoexperiment27Multi_verse_model-7B-GGUF/resolve/main/Ognoexperiment27Multi_verse_model-7B.f16.gguf) | f16 | 14.6 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
juniorrios/llama-3.2 | juniorrios | "2025-03-25T04:02:28Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-03-25T04:01:04Z" | ---
base_model: unsloth/llama-3.2-1b-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** juniorrios
- **License:** apache-2.0
- **Finetuned from model:** unsloth/llama-3.2-1b-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
medspaner/dccuchile-bert-base-spanish-wwm-uncased-re-ct-v2 | medspaner | "2025-01-10T17:41:12Z" | 14 | 0 | transformers | [
"transformers",
"safetensors",
"bert",
"es",
"base_model:dccuchile/bert-base-spanish-wwm-uncased",
"base_model:finetune:dccuchile/bert-base-spanish-wwm-uncased",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2024-12-12T16:36:52Z" | ---
library_name: transformers
language:
- es
base_model:
- dccuchile/bert-base-spanish-wwm-uncased
license: cc-by-nc-4.0
metrics:
- accuracy
- precision
- recall
- f1
---
# Model Card for dccuchile-bert-base-spanish-wwm-uncased-re-ct
This relation extraction model extracts intervention-associated relationships, temporal relations, negation/speculation, and other relations relevant
to clinical trials.
The model achieves the following results on the test set (when trained with the training and development set; results are averaged over 5 evaluation rounds):
- Precision: 0.868 (±0.009)
- Recall: 0.857 (±0.006)
- F1: 0.862 (±0.006)
- Accuracy: 0.907 (±0.003)
## Model description
This model adapts the pre-trained model [bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased).
It is fine-tuned to conduct relation extraction on Spanish texts about clinical trials.
The model is fine-tuned on the [Clinical Trials for Evidence-Based-Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/).
If you use this model, please, cite as follows:
```
@article{campillosetal2025,
title = {{Benchmarking Transformer Models for Relation Extraction and Concept Normalization in a Clinical Trials Corpus}},
author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Zakhir-Puig, Sof{\'i}a and Heras-Vicente, J{\'o}nathan},
journal = {(Under review)},
year={2025}
}
```
## Intended uses & limitations
**Disclosure**: *This model is under development and needs to be improved. It should not be used for medical decision making without human assistance and supervision*
This model is intended for a generalist purpose, and may have bias and/or any other undesirable distortions.
Third parties who deploy or provide systems and/or services using any of these models (or using systems based on these models) should note that it is their responsibility to mitigate the risks arising from their use. Third parties, in any event, need to comply with applicable regulations, including regulations concerning the use of artificial intelligence.
The owner or creator of the models will in no event be liable for any results arising from the use made by third parties of these models.
**Descargo de responsabilidad**: *Esta herramienta se encuentra en desarrollo y no debe ser empleada para la toma de decisiones médicas*
La finalidad de este modelo es generalista, y se advierte que puede tener sesgos y/u otro tipo de distorsiones indeseables.
Terceras partes que desplieguen o proporcionen sistemas y/o servicios usando alguno de estos modelos (o utilizando sistemas basados en estos modelos) han de tener presente que es su responsabilidad abordar y minimizar los riesgos derivados de su uso. Las terceras partes, en cualquier circunstancia, deben cumplir con la normativa aplicable, incluyendo la normativa que concierne al uso de la inteligencia artificial.
El propietario o creador de los modelos de ningún modo será responsable de los resultados derivados del uso que las terceras partes hagan de estos modelos.
## Training and evaluation data
The data used for fine-tuning are the [Clinical Trials for Evidence-Based-Medicine in Spanish corpus](http://www.lllf.uam.es/ESP/nlpdata/wp2/) version 3 (annotated with semantic relationships).
It is a collection of 1200 texts about clinical trial studies and clinical trial announcements:
- 500 abstracts from journals published under a Creative Commons license, e.g. available in PubMed or the Scientific Electronic Library Online (SciELO)
- 700 clinical trials announcements published in the European Clinical Trials Register and Repositorio Español de Estudios Clínicos
The CT-EBM-ES resource (version 1) can be cited as follows:
```
@article{campillosetal-midm2021,
title = {A clinical trials corpus annotated with UMLS© entities to enhance the access to Evidence-Based Medicine},
author = {Campillos-Llanos, Leonardo and Valverde-Mateos, Ana and Capllonch-Carri{\'o}n, Adri{\'a}n and Moreno-Sandoval, Antonio},
journal = {BMC Medical Informatics and Decision Making},
volume={21},
number={1},
pages={1--19},
year={2021},
publisher={BioMed Central}
}
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: we used different seeds for 5 evaluation rounds, and uploaded the model with the best results
- optimizer: AdamW
- weight decay: 1e-2
- lr_scheduler_type: linear
- num_epochs: 5
### Training results (test set; average and standard deviation of 5 rounds with different seeds)
| Precision | Recall | F1 | Accuracy |
|:--------------:|:--------------:|:--------------:|:--------------:|
| 0.877 (±0.009) | 0.857 (±0.006) | 0.862 (±0.006) | 0.907 (±0.003) |
**Results per class (test set; best model)**
| Class | Precision | Recall | F1 | Support |
|:---------------:|:--------------:|:--------------:|:--------------:|:---------:|
| Experiences | 0.96 | 0.98 | 0.97 | 2003 |
| Has_Age | 0.93 | 0.82 | 0.87 | 152 |
| Has_Dose_or_Strength | 0.79 | 0.83 | 0.81 | 189 |
| Has_Drug_Form | 0.91 | 0.80 | 0.85 | 64 |
| Has_Duration_or_Interval | 0.79 | 0.82 | 0.81 | 365 |
| Has_Frequency | 0.84 | 0.75 | 0.79 | 84 |
| Has_Quantifier_or_Qualifier | 0.89 | 0.89 | 0.89 | 1040 |
| Has_Result_or_Value | 0.91 | 0.91 | 0.91 | 384 |
| Has_Route_or_Mode | 0.89 | 0.83 | 0.86 | 221 |
| Has_Time_Data | 0.89 | 0.83 | 0.86 | 589 |
| Location_of | 0.94 | 0.97 | 0.96 | 1119 |
| Used_for | 0.86 | 0.88 | 0.87 | 731 |
### Usage
To use this model you need the `transformers`, `torch`, and `datasets` libraries installed.
```shell
pip install transformers torch datasets
```
Then you can define the necessary functions and classes to load the model.
```python
from transformers import (
    BertModel, BertPreTrainedModel,
    DataCollatorWithPadding, AutoTokenizer,
)
from transformers.modeling_outputs import SequenceClassifierOutput
import torch
import torch.nn as nn
from datasets import Dataset
from torch.utils.data import DataLoader
class BertForRelationExtraction(BertPreTrainedModel):
def __init__(self, config, num_labels):
super(BertForRelationExtraction, self).__init__(config)
self.num_labels = num_labels
# body
self.bert = BertModel(config)
# head
self.dropout = nn.Dropout(config.hidden_dropout_prob)
self.layer_norm = nn.LayerNorm(config.hidden_size * 2)
self.linear = nn.Linear(config.hidden_size * 2, self.num_labels)
self.init_weights()
def forward(self, input_ids, token_type_ids, attention_mask,
span_idxs, labels=None):
outputs = (
self.bert(input_ids, token_type_ids=token_type_ids,
attention_mask=attention_mask,
output_hidden_states=False)
.last_hidden_state)
sub_maxpool, obj_maxpool = [], []
for bid in range(outputs.size(0)):
# span includes entity markers, maxpool across span
sub_span = torch.max(outputs[bid, span_idxs[bid, 0]:span_idxs[bid, 1]+1, :],
dim=0, keepdim=True).values
obj_span = torch.max(outputs[bid, span_idxs[bid, 2]:span_idxs[bid, 3]+1, :],
dim=0, keepdim=True).values
sub_maxpool.append(sub_span)
obj_maxpool.append(obj_span)
sub_emb = torch.cat(sub_maxpool, dim=0)
obj_emb = torch.cat(obj_maxpool, dim=0)
rel_input = torch.cat((sub_emb, obj_emb), dim=-1)
rel_input = self.layer_norm(rel_input)
rel_input = self.dropout(rel_input)
logits = self.linear(rel_input)
        if labels is not None:
            loss_fn = nn.CrossEntropyLoss()
            loss = loss_fn(logits.view(-1, self.num_labels), labels.view(-1))
            return SequenceClassifierOutput(loss=loss, logits=logits)
        else:
            return SequenceClassifierOutput(logits=logits)
id2label = {0: 'Experiences',
1: 'Has_Age',
2: 'Has_Dose_or_Strength',
3: 'Has_Duration_or_Interval',
4: 'Has_Frequency',
5: 'Has_Route_or_Mode',
6: 'Location_of',
7: 'Used_for'}
def encode_data_inference(token_list,tokenizer):
tokenized_inputs = tokenizer(token_list,
is_split_into_words=True,
truncation=True)
span_idxs = []
for input_id in tokenized_inputs.input_ids:
tokens = tokenizer.convert_ids_to_tokens(input_id)
span_idxs.append([
[idx for idx, token in enumerate(tokens) if token.startswith("<S:")][0],
[idx for idx, token in enumerate(tokens) if token.startswith("</S:")][0],
[idx for idx, token in enumerate(tokens) if token.startswith("<O:")][0],
[idx for idx, token in enumerate(tokens) if token.startswith("</O:")][0]
])
tokenized_inputs["span_idxs"] = span_idxs
# tokenized_inputs["labels"] = [label2id[label] for label in examples["label"]]
return tokenized_inputs
def predict_example(example,model,tokenizer):
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
collate_fn = DataCollatorWithPadding(tokenizer, padding="longest", return_tensors="pt")
encoded_data = encode_data_inference(example,tokenizer)
inferenceds = Dataset.from_dict(encoded_data)
inference_dl = DataLoader(inferenceds,
shuffle=False,
# sampler=SubsetRandomSampler(np.random.randint(0, encoded_nyt_dataset["test"].num_rows, 100).tolist()),
batch_size=1,
collate_fn=collate_fn)
for batch in inference_dl:
batch = {k: v.to(device) for k, v in batch.items()}
with torch.no_grad():
outputs = model(**batch)
predictions = torch.argmax(outputs.logits, dim=-1).cpu().numpy()
return [id2label[p] for p in predictions]
```
Finally, you can use it to make predictions:
```python
example = [['Título',
'público:',
'Estudio',
'multicéntrico,',
'aleatorizado,',
'doble',
'ciego,',
'controlado',
'con',
'placebo',
'del',
'anticuerpo',
'monoclonal',
'humano',
'anti-TNF',
'<O:CHE>',
'Adalimumab',
'</O:CHE>',
'en',
'<S:LIV>',
'sujetos',
'pediátricos',
'</S:LIV>',
'con',
'colitis',
'ulcerosa',
'moderada',
'o',
'grave']]
model = BertForRelationExtraction.from_pretrained(
    "medspaner/dccuchile-bert-base-spanish-wwm-uncased-re-ct-v2", 8  # num_labels
)
tokenizer = AutoTokenizer.from_pretrained("medspaner/dccuchile-bert-base-spanish-wwm-uncased-re-ct-v2")
predict_example(example, model, tokenizer)
```
### Framework versions
- Transformers 4.42.4
- Pytorch 2.0.1+cu117
- Datasets 2.15.0
- Tokenizers 0.19.1 |
Eagelaxis/Cetus-mix_version2 | Eagelaxis | "2023-02-25T04:03:02Z" | 0 | 5 | null | [
"license:creativeml-openrail-m",
"region:us"
] | null | "2023-02-25T04:03:02Z" | ---
license: creativeml-openrail-m
---
|
pizi0314/pi-model | pizi0314 | "2024-06-02T17:54:37Z" | 0 | 0 | null | [
"region:us"
] | null | "2024-06-02T17:49:39Z" | <!-- Improved compatibility of back to top link: See: https://github.com/othneildrew/Best-README-Template/pull/73 -->
<a name="readme-top"></a>
<!--
*** Thanks for checking out the Best-README-Template. If you have a suggestion
*** that would make this better, please fork the repo and create a pull request
*** or simply open an issue with the tag "enhancement".
*** Don't forget to give the project a star!
*** Thanks again! Now go create something AMAZING! :D
-->
<!-- PROJECT SHIELDS -->
<!--
*** I'm using markdown "reference style" links for readability.
*** Reference links are enclosed in brackets [ ] instead of parentheses ( ).
*** See the bottom of this document for the declaration of the reference variables
*** for contributors-url, forks-url, etc. This is an optional, concise syntax you may use.
*** https://www.markdownguide.org/basic-syntax/#reference-style-links
-->
[![Contributors][contributors-shield]][contributors-url]
[![Forks][forks-shield]][forks-url]
[![Stargazers][stars-shield]][stars-url]
[![Issues][issues-shield]][issues-url]
[![MIT License][license-shield]][license-url]
[![LinkedIn][linkedin-shield]][linkedin-url]
<!-- PROJECT LOGO -->
<br />
<div align="center">
<a href="https://github.com/othneildrew/Best-README-Template">
<img src="images/logo.png" alt="Logo" width="80" height="80">
</a>
<h3 align="center">Best-README-Template</h3>
<p align="center">
An awesome README template to jumpstart your projects!
<br />
<a href="https://github.com/othneildrew/Best-README-Template"><strong>Explore the docs »</strong></a>
<br />
<br />
<a href="https://github.com/othneildrew/Best-README-Template">View Demo</a>
·
<a href="https://github.com/othneildrew/Best-README-Template/issues/new?labels=bug&template=bug-report---.md">Report Bug</a>
·
<a href="https://github.com/othneildrew/Best-README-Template/issues/new?labels=enhancement&template=feature-request---.md">Request Feature</a>
</p>
</div>
<!-- TABLE OF CONTENTS -->
<details>
<summary>Table of Contents</summary>
<ol>
<li>
<a href="#about-the-project">About The Project</a>
<ul>
<li><a href="#built-with">Built With</a></li>
</ul>
</li>
<li>
<a href="#getting-started">Getting Started</a>
<ul>
<li><a href="#prerequisites">Prerequisites</a></li>
<li><a href="#installation">Installation</a></li>
</ul>
</li>
<li><a href="#usage">Usage</a></li>
<li><a href="#roadmap">Roadmap</a></li>
<li><a href="#contributing">Contributing</a></li>
<li><a href="#license">License</a></li>
<li><a href="#contact">Contact</a></li>
<li><a href="#acknowledgments">Acknowledgments</a></li>
</ol>
</details>
<!-- ABOUT THE PROJECT -->
## About The Project
[![Product Name Screen Shot][product-screenshot]](https://example.com)
There are many great README templates available on GitHub; however, I didn't find one that really suited my needs so I created this enhanced one. I want to create a README template so amazing that it'll be the last one you ever need -- I think this is it.
Here's why:
* Your time should be focused on creating something amazing. A project that solves a problem and helps others
* You shouldn't be doing the same tasks over and over like creating a README from scratch
* You should implement DRY principles to the rest of your life :smile:
Of course, no one template will serve all projects since your needs may be different. So I'll be adding more in the near future. You may also suggest changes by forking this repo and creating a pull request or opening an issue. Thanks to all the people who have contributed to expanding this template!
Use the `BLANK_README.md` to get started.
<p align="right">(<a href="#readme-top">back to top</a>)</p>
### Built With
This section should list any major frameworks/libraries used to bootstrap your project. Leave any add-ons/plugins for the acknowledgements section. Here are a few examples.
* [![Next][Next.js]][Next-url]
* [![React][React.js]][React-url]
* [![Vue][Vue.js]][Vue-url]
* [![Angular][Angular.io]][Angular-url]
* [![Svelte][Svelte.dev]][Svelte-url]
* [![Laravel][Laravel.com]][Laravel-url]
* [![Bootstrap][Bootstrap.com]][Bootstrap-url]
* [![JQuery][JQuery.com]][JQuery-url]
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- GETTING STARTED -->
## Getting Started
This is an example of how you may give instructions on setting up your project locally.
To get a local copy up and running follow these simple example steps.
### Prerequisites
This is an example of how to list things you need to use the software and how to install them.
* npm
```sh
npm install npm@latest -g
```
### Installation
_Below is an example of how you can instruct your audience on installing and setting up your app. This template doesn't rely on any external dependencies or services._
1. Get a free API Key at [https://example.com](https://example.com)
2. Clone the repo
```sh
git clone https://github.com/your_username_/Project-Name.git
```
3. Install NPM packages
```sh
npm install
```
4. Enter your API in `config.js`
```js
const API_KEY = 'ENTER YOUR API';
```
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- USAGE EXAMPLES -->
## Usage
Use this space to show useful examples of how a project can be used. Additional screenshots, code examples and demos work well in this space. You may also link to more resources.
_For more examples, please refer to the [Documentation](https://example.com)_
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- ROADMAP -->
## Roadmap
- [x] Add Changelog
- [x] Add back to top links
- [ ] Add Additional Templates w/ Examples
- [ ] Add "components" document to easily copy & paste sections of the readme
- [ ] Multi-language Support
- [ ] Chinese
- [ ] Spanish
See the [open issues](https://github.com/othneildrew/Best-README-Template/issues) for a full list of proposed features (and known issues).
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- CONTRIBUTING -->
## Contributing
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.
If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement".
Don't forget to give the project a star! Thanks again!
1. Fork the Project
2. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
3. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the Branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- LICENSE -->
## License
Distributed under the MIT License. See `LICENSE.txt` for more information.
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- CONTACT -->
## Contact
Your Name - [@your_twitter](https://twitter.com/your_username) - email@example.com
Project Link: [https://github.com/your_username/repo_name](https://github.com/your_username/repo_name)
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- ACKNOWLEDGMENTS -->
## Acknowledgments
Use this space to list resources you find helpful and would like to give credit to. I've included a few of my favorites to kick things off!
* [Choose an Open Source License](https://choosealicense.com)
* [GitHub Emoji Cheat Sheet](https://www.webpagefx.com/tools/emoji-cheat-sheet)
* [Malven's Flexbox Cheatsheet](https://flexbox.malven.co/)
* [Malven's Grid Cheatsheet](https://grid.malven.co/)
* [Img Shields](https://shields.io)
* [GitHub Pages](https://pages.github.com)
* [Font Awesome](https://fontawesome.com)
* [React Icons](https://react-icons.github.io/react-icons/search)
<p align="right">(<a href="#readme-top">back to top</a>)</p>
<!-- MARKDOWN LINKS & IMAGES -->
<!-- https://www.markdownguide.org/basic-syntax/#reference-style-links -->
[contributors-shield]: https://img.shields.io/github/contributors/othneildrew/Best-README-Template.svg?style=for-the-badge
[contributors-url]: https://github.com/othneildrew/Best-README-Template/graphs/contributors
[forks-shield]: https://img.shields.io/github/forks/othneildrew/Best-README-Template.svg?style=for-the-badge
[forks-url]: https://github.com/othneildrew/Best-README-Template/network/members
[stars-shield]: https://img.shields.io/github/stars/othneildrew/Best-README-Template.svg?style=for-the-badge
[stars-url]: https://github.com/othneildrew/Best-README-Template/stargazers
[issues-shield]: https://img.shields.io/github/issues/othneildrew/Best-README-Template.svg?style=for-the-badge
[issues-url]: https://github.com/othneildrew/Best-README-Template/issues
[license-shield]: https://img.shields.io/github/license/othneildrew/Best-README-Template.svg?style=for-the-badge
[license-url]: https://github.com/othneildrew/Best-README-Template/blob/master/LICENSE.txt
[linkedin-shield]: https://img.shields.io/badge/-LinkedIn-black.svg?style=for-the-badge&logo=linkedin&colorB=555
[linkedin-url]: https://linkedin.com/in/othneildrew
[product-screenshot]: images/screenshot.png
[Next.js]: https://img.shields.io/badge/next.js-000000?style=for-the-badge&logo=nextdotjs&logoColor=white
[Next-url]: https://nextjs.org/
[React.js]: https://img.shields.io/badge/React-20232A?style=for-the-badge&logo=react&logoColor=61DAFB
[React-url]: https://reactjs.org/
[Vue.js]: https://img.shields.io/badge/Vue.js-35495E?style=for-the-badge&logo=vuedotjs&logoColor=4FC08D
[Vue-url]: https://vuejs.org/
[Angular.io]: https://img.shields.io/badge/Angular-DD0031?style=for-the-badge&logo=angular&logoColor=white
[Angular-url]: https://angular.io/
[Svelte.dev]: https://img.shields.io/badge/Svelte-4A4A55?style=for-the-badge&logo=svelte&logoColor=FF3E00
[Svelte-url]: https://svelte.dev/
[Laravel.com]: https://img.shields.io/badge/Laravel-FF2D20?style=for-the-badge&logo=laravel&logoColor=white
[Laravel-url]: https://laravel.com
[Bootstrap.com]: https://img.shields.io/badge/Bootstrap-563D7C?style=for-the-badge&logo=bootstrap&logoColor=white
[Bootstrap-url]: https://getbootstrap.com
[JQuery.com]: https://img.shields.io/badge/jQuery-0769AD?style=for-the-badge&logo=jquery&logoColor=white
[JQuery-url]: https://jquery.com |
Sorour/finqa-ft | Sorour | "2024-06-19T21:47:36Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2024-06-19T21:47:08Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
nanirudh/qa_model_v3 | nanirudh | "2023-08-10T05:23:57Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-08-10T05:23:48Z" | ---
library_name: peft
---
## Training procedure
### Framework versions
- PEFT 0.5.0.dev0
|
Sairii/learn_hf_food_not_food_text_classifier | Sairii | "2025-02-13T10:35:31Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-02-12T16:56:23Z" | ---
library_name: transformers
license: apache-2.0
base_model: distilbert/distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: learn_hf_food_not_food_text_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# learn_hf_food_not_food_text_classifier
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0004
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
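A minimal usage sketch (the example sentence is illustrative; the label names come from the checkpoint's config and are not documented in this card):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Sairii/learn_hf_food_not_food_text_classifier",
)
print(classifier("A steaming bowl of ramen with a soft-boiled egg"))
```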
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3329 | 1.0 | 7 | 0.0339 | 1.0 |
| 0.0182 | 2.0 | 14 | 0.0045 | 1.0 |
| 0.0036 | 3.0 | 21 | 0.0016 | 1.0 |
| 0.0016 | 4.0 | 28 | 0.0009 | 1.0 |
| 0.001 | 5.0 | 35 | 0.0007 | 1.0 |
| 0.0008 | 6.0 | 42 | 0.0006 | 1.0 |
| 0.0007 | 7.0 | 49 | 0.0005 | 1.0 |
| 0.0006 | 8.0 | 56 | 0.0005 | 1.0 |
| 0.0006 | 9.0 | 63 | 0.0004 | 1.0 |
| 0.0006 | 10.0 | 70 | 0.0004 | 1.0 |
### Framework versions
- Transformers 4.48.2
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
|
earnxus/2bf8494e-1082-447d-9200-beaddbe8ac0d | earnxus | "2025-02-02T22:21:19Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"mistral",
"axolotl",
"generated_from_trainer",
"base_model:lcw99/zephykor-ko-7b-chang",
"base_model:adapter:lcw99/zephykor-ko-7b-chang",
"8-bit",
"bitsandbytes",
"region:us"
] | null | "2025-02-02T21:56:57Z" | ---
library_name: peft
base_model: lcw99/zephykor-ko-7b-chang
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 2bf8494e-1082-447d-9200-beaddbe8ac0d
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: lcw99/zephykor-ko-7b-chang
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 8428966f8aadfc44_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/8428966f8aadfc44_train_data.json
type:
field_instruction: topic
field_output: prompt
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
device_map: auto
do_eval: true
early_stopping_patience: null
eval_batch_size: 2
eval_max_new_tokens: 128
eval_steps: null
eval_table_size: null
evals_per_epoch: null
flash_attention: true
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: true
gradient_clipping: 1.0
group_by_length: true
hub_model_id: earnxus/2bf8494e-1082-447d-9200-beaddbe8ac0d
hub_repo: null
hub_strategy: end
hub_token: null
learning_rate: 0.0001
load_in_4bit: true
load_in_8bit: true
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_memory:
0: 75GB
max_steps: 200
micro_batch_size: 2
mlflow_experiment_name: /tmp/8428966f8aadfc44_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: null
saves_per_epoch: null
sequence_len: 1024
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: techspear-hub
wandb_mode: online
wandb_name: 08e13afd-6a82-4cfe-b2b2-25f8ccca840c
wandb_project: Gradients-On-Nine
wandb_run: your_name
wandb_runid: 08e13afd-6a82-4cfe-b2b2-25f8ccca840c
warmup_steps: 5
weight_decay: 0.01
xformers_attention: null
```
</details><br>
# 2bf8494e-1082-447d-9200-beaddbe8ac0d
This model is a fine-tuned version of [lcw99/zephykor-ko-7b-chang](https://huggingface.co/lcw99/zephykor-ko-7b-chang) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7621
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 16.8347 | 0.1663 | 200 | 2.7621 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
duja1/collage | duja1 | "2023-05-16T09:37:12Z" | 32 | 3 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2022-12-13T11:09:16Z" | ---
license: creativeml-openrail-m
tags:
- text-to-image
widget:
- text: c123ollage
---
### collage Dreambooth model trained by duja1 with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
c123ollage (use that in your prompt) |
muhtasham/hifigan-ar-v2 | muhtasham | "2025-03-30T22:24:43Z" | 2 | 1 | null | [
"speech",
"audio",
"vocoder",
"hifigan",
"tts",
"en",
"license:mit",
"region:us"
] | null | "2025-03-27T22:07:21Z" | |
simple-sf/sf-test-model-1.0.0 | simple-sf | "2024-04-24T03:19:11Z" | 0 | 0 | null | [
"zh",
"dataset:HuggingFaceM4/the_cauldron",
"license:apache-2.0",
"region:us"
] | null | "2024-04-24T03:15:45Z" | ---
license: apache-2.0
datasets:
- HuggingFaceM4/the_cauldron
language:
- zh
--- |
yusheng123z/llama3.1 | yusheng123z | "2024-09-20T02:22:25Z" | 9 | 1 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"llama-factory",
"orpo",
"conversational",
"en",
"zh",
"arxiv:2403.07691",
"base_model:meta-llama/Llama-3.1-8B-Instruct",
"base_model:finetune:meta-llama/Llama-3.1-8B-Instruct",
"license:llama3.1",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2024-09-19T15:23:31Z" | ---
license: llama3.1
library_name: transformers
pipeline_tag: text-generation
base_model: meta-llama/Meta-Llama-3.1-8B-Instruct
language:
- en
- zh
tags:
- llama-factory
- orpo
---
> [!CAUTION]
> For optimal performance, we refrain from fine-tuning the model's identity. Thus, inquiries such as "Who are you" or "Who developed you" may yield random responses that are not necessarily accurate.
> [!IMPORTANT]
> If you enjoy our model, please **give it a star on our Hugging Face repo** and kindly [**cite our model**](https://huggingface.co/shenzhi-wang/Llama3.1-8B-Chinese-Chat#citation). Your support means a lot to us. Thank you!
# Updates
- 🚀🚀🚀 [July 24, 2024] We now introduce [shenzhi-wang/Llama3.1-8B-Chinese-Chat](https://huggingface.co/shenzhi-wang/Llama3.1-8B-Chinese-Chat)! The training dataset contains >100K preference pairs, and it exhibits significant enhancements, especially in **roleplay**, **function calling**, and **math** capabilities!
- 🔥 We provide the official **q4_k_m, q8_0, and f16 GGUF** versions of Llama3.1-8B-Chinese-Chat-**v2.1** at https://huggingface.co/shenzhi-wang/Llama3.1-8B-Chinese-Chat/tree/main/gguf!
# Model Summary
Llama3.1-8B-Chinese-Chat is an instruction-tuned language model for Chinese & English users, with abilities such as roleplay and tool use, built upon the Meta-Llama-3.1-8B-Instruct model.
Developers: [Shenzhi Wang](https://shenzhi-wang.netlify.app)\*, [Yaowei Zheng](https://github.com/hiyouga)\*, Guoyin Wang (in.ai), Shiji Song, Gao Huang. (\*: Equal Contribution)
- License: [Llama-3.1 License](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B/blob/main/LICENSE)
- Base Model: Meta-Llama-3.1-8B-Instruct
- Model Size: 8.03B
- Context length: 128K (reported by [Meta-Llama-3.1-8B-Instruct model](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct), untested for our Chinese model)
# 1. Introduction
This is the first model specifically fine-tuned for Chinese & English users based on the [Meta-Llama-3.1-8B-Instruct model](https://huggingface.co/meta-llama/Meta-Llama-3.1-8B-Instruct). The fine-tuning algorithm used is ORPO [1].
[1] Hong, Jiwoo, Noah Lee, and James Thorne. "Reference-free Monolithic Preference Optimization with Odds Ratio." arXiv preprint arXiv:2403.07691 (2024).
Training framework: [LLaMA-Factory](https://github.com/hiyouga/LLaMA-Factory).
Training details (an illustrative ORPO configuration sketch follows the list):
- epochs: 3
- learning rate: 3e-6
- learning rate scheduler type: cosine
- Warmup ratio: 0.1
- cutoff len (i.e. context length): 8192
- orpo beta (i.e. $\lambda$ in the ORPO paper): 0.05
- global batch size: 128
- fine-tuning type: full parameters
- optimizer: paged_adamw_32bit
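The fine-tuning itself was done with LLaMA-Factory. Purely as an illustration of the hyperparameters above, a rough equivalent with TRL's ORPO trainer might look like the following; the dataset name and output path are hypothetical placeholders, not the authors' setup.
```python
# Illustration only: the authors used LLaMA-Factory, not this script.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import ORPOConfig, ORPOTrainer

model_id = "meta-llama/Meta-Llama-3.1-8B-Instruct"
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")
tokenizer = AutoTokenizer.from_pretrained(model_id)

args = ORPOConfig(
    output_dir="llama3.1-8b-chinese-chat-orpo",  # hypothetical
    num_train_epochs=3,
    learning_rate=3e-6,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    max_length=8192,                 # cutoff len / context length
    beta=0.05,                       # lambda in the ORPO paper
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,   # scale across devices to a global batch of 128
    optim="paged_adamw_32bit",
    bf16=True,
)
trainer = ORPOTrainer(
    model=model,
    args=args,
    train_dataset=load_dataset("your/preference-pairs", split="train"),  # hypothetical dataset
    tokenizer=tokenizer,             # `processing_class=` in newer TRL versions
)
trainer.train()
```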
# 2. Usage
## 2.1 Usage of Our BF16 Model
1. Please upgrade the `transformers` package to ensure it supports Llama3.1 models. The current version we are using is `4.43.0`.
2. Use the following Python script to download our BF16 model
```python
from huggingface_hub import snapshot_download
snapshot_download(repo_id="shenzhi-wang/Llama3.1-8B-Chinese-Chat", ignore_patterns=["*.gguf"]) # Download our BF16 model without downloading GGUF models.
```
3. Inference with the BF16 model
```python
import torch
import transformers
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "/Your/Local/Path/to/Llama3.1-8B-Chinese-Chat"
dtype = torch.bfloat16
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="cuda",
torch_dtype=dtype,
)
chat = [
{"role": "user", "content": "写一首关于机器学习的诗。"},
]
input_ids = tokenizer.apply_chat_template(
chat, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(
input_ids,
max_new_tokens=8192,
do_sample=True,
temperature=0.6,
top_p=0.9,
)
response = outputs[0][input_ids.shape[-1] :]
print(tokenizer.decode(response, skip_special_tokens=True))
```
## 2.2 Usage of Our GGUF Models
1. Download our GGUF models from the [gguf_models folder](https://huggingface.co/shenzhi-wang/Llama3.1-8B-Chinese-Chat/tree/main/gguf);
2. Use the GGUF models with [LM Studio](https://lmstudio.ai/);
3. You can also follow the instructions from https://github.com/ggerganov/llama.cpp/tree/master#usage to use GGUF models; a minimal `llama-cpp-python` sketch follows.
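For example, assuming a GGUF file has been downloaded locally (the path below is a placeholder):
```python
# pip install llama-cpp-python; the model path below is a local placeholder.
from llama_cpp import Llama

llm = Llama(model_path="./llama3.1-8b-chinese-chat-q4_k_m.gguf", n_ctx=8192)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "写一首关于机器学习的诗。"}],
    temperature=0.6,
    top_p=0.9,
)
print(out["choices"][0]["message"]["content"])
```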
# Citation
If our Llama3.1-8B-Chinese-Chat is helpful, please kindly cite as:
```
@misc {shenzhi_wang_2024,
author = { Wang, Shenzhi and Zheng, Yaowei and Wang, Guoyin and Song, Shiji and Huang, Gao },
title = { Llama3.1-8B-Chinese-Chat },
year = 2024,
url = { https://huggingface.co/shenzhi-wang/Llama3.1-8B-Chinese-Chat },
doi = { 10.57967/hf/2779 },
publisher = { Hugging Face }
}
```
|
mradermacher/calculator_agent_qwen2.5_0.5b-GGUF | mradermacher | "2025-05-01T14:39:22Z" | 333 | 0 | transformers | [
"transformers",
"gguf",
"agent",
"grpo",
"multi-turn-rl",
"en",
"base_model:Dan-AiTuning/calculator_agent_qwen2.5_0.5b",
"base_model:quantized:Dan-AiTuning/calculator_agent_qwen2.5_0.5b",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-26T16:05:42Z" | |
Holarissun/RM-HH-Mix_harmless_gpt3_20000_gemma2b_shuffleFalse_extractchosenFalse | Holarissun | "2024-04-19T23:34:02Z" | 3 | 0 | peft | [
"peft",
"safetensors",
"trl",
"reward-trainer",
"generated_from_trainer",
"base_model:google/gemma-2b",
"base_model:adapter:google/gemma-2b",
"license:gemma",
"region:us"
] | null | "2024-04-19T23:33:59Z" | ---
license: gemma
library_name: peft
tags:
- trl
- reward-trainer
- generated_from_trainer
metrics:
- accuracy
base_model: google/gemma-2b
model-index:
- name: RM-HH-Mix_harmless_gpt3_20000_gemma2b_shuffleFalse_extractchosenFalse
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# RM-HH-Mix_harmless_gpt3_20000_gemma2b_shuffleFalse_extractchosenFalse
This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0445
- Accuracy: 0.9815
## Model description
More information needed
## Intended uses & limitations
More information needed
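A hedged loading sketch, assuming this is a PEFT reward-model adapter over `google/gemma-2b` with a single-logit classification head (not verified against the training code):
```python
# Assumption: TRL-style reward adapter, i.e. a 1-label sequence-classification head.
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "google/gemma-2b"
adapter_id = "Holarissun/RM-HH-Mix_harmless_gpt3_20000_gemma2b_shuffleFalse_extractchosenFalse"

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=1)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

text = "Human: How can I stay safe online?\n\nAssistant: Use strong, unique passwords."
with torch.no_grad():
    score = model(**tokenizer(text, return_tensors="pt")).logits.squeeze().item()
print(score)  # higher = preferred, under the reward-model assumption
```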
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.41e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.8191 | 0.06 | 250 | 0.5824 | 0.695 |
| 0.6294 | 0.11 | 500 | 0.1346 | 0.953 |
| 0.5811 | 0.17 | 750 | 0.0888 | 0.9705 |
| 0.5753 | 0.22 | 1000 | 0.0684 | 0.975 |
| 0.5539 | 0.28 | 1250 | 0.0588 | 0.979 |
| 0.5764 | 0.33 | 1500 | 0.0595 | 0.9785 |
| 0.5261 | 0.39 | 1750 | 0.0558 | 0.979 |
| 0.5423 | 0.44 | 2000 | 0.0533 | 0.9795 |
| 0.5261 | 0.5 | 2250 | 0.0501 | 0.98 |
| 0.5363 | 0.56 | 2500 | 0.0485 | 0.98 |
| 0.5051 | 0.61 | 2750 | 0.0472 | 0.981 |
| 0.5157 | 0.67 | 3000 | 0.0509 | 0.9795 |
| 0.5368 | 0.72 | 3250 | 0.0507 | 0.9785 |
| 0.5281 | 0.78 | 3500 | 0.0467 | 0.981 |
| 0.5005 | 0.83 | 3750 | 0.0450 | 0.9815 |
| 0.5239 | 0.89 | 4000 | 0.0445 | 0.9815 |
| 0.5111 | 0.94 | 4250 | 0.0445 | 0.9815 |
### Framework versions
- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.2 |
keke1234/Hao-chi-mi-hunhe-kongzhi | keke1234 | "2025-04-18T10:39:00Z" | 0 | 0 | null | [
"license:mit",
"region:us"
] | null | "2025-04-18T10:33:29Z" | |
gustavokpc/IC_primeiro | gustavokpc | "2024-01-11T18:06:06Z" | 3 | 0 | transformers | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-10-20T15:22:09Z" | ---
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_keras_callback
model-index:
- name: gustavokpc/IC_primeiro
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# gustavokpc/IC_primeiro
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0532
- Train Accuracy: 0.9812
- Train F1 M: 0.5544
- Train Precision M: 0.4027
- Train Recall M: 0.9558
- Validation Loss: 0.2580
- Validation Accuracy: 0.9175
- Validation F1 M: 0.5588
- Validation Precision M: 0.4059
- Validation Recall M: 0.9423
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
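A minimal TensorFlow usage sketch (the label mapping is not documented in this card):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "gustavokpc/IC_primeiro"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

batch = tokenizer(["an example sentence to classify"], return_tensors="tf", padding=True)
probs = tf.nn.softmax(model(**batch).logits, axis=-1)
print(probs.numpy())
```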
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'module': 'keras.optimizers.schedules', 'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3790, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, 'registered_name': None}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Train F1 M | Train Precision M | Train Recall M | Validation Loss | Validation Accuracy | Validation F1 M | Validation Precision M | Validation Recall M | Epoch |
|:----------:|:--------------:|:----------:|:-----------------:|:--------------:|:---------------:|:-------------------:|:---------------:|:----------------------:|:-------------------:|:-----:|
| 0.3533 | 0.8498 | 0.4723 | 0.4085 | 0.6530 | 0.2424 | 0.9037 | 0.5060 | 0.3909 | 0.7591 | 0 |
| 0.1974 | 0.9259 | 0.5184 | 0.3930 | 0.8161 | 0.1978 | 0.9202 | 0.5425 | 0.4014 | 0.8778 | 1 |
| 0.1242 | 0.9551 | 0.5382 | 0.3974 | 0.8918 | 0.1970 | 0.9248 | 0.5583 | 0.4106 | 0.9195 | 2 |
| 0.0823 | 0.9705 | 0.5511 | 0.4024 | 0.9370 | 0.2550 | 0.9116 | 0.5567 | 0.4057 | 0.9330 | 3 |
| 0.0532 | 0.9812 | 0.5544 | 0.4027 | 0.9558 | 0.2580 | 0.9175 | 0.5588 | 0.4059 | 0.9423 | 4 |
### Framework versions
- Transformers 4.34.1
- TensorFlow 2.14.0
- Datasets 2.14.5
- Tokenizers 0.14.1
|
VPTQ-community/Qwen2.5-72B-Instruct-v16-k65536-256-woft | VPTQ-community | "2025-03-20T05:21:46Z" | 5 | 0 | null | [
"safetensors",
"qwen2",
"license:other",
"vptq",
"region:us"
] | null | "2024-09-25T07:59:11Z" | ---
license: other
license_name: qwen
license_link: https://huggingface.co/Qwen/Qwen2.5-72B-Instruct/blob/main/LICENSE
---
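A minimal inference sketch, assuming this checkpoint follows the usual VPTQ-community loading pattern with the `vptq` package (not verified against this repo):
```python
# Assumption: standard VPTQ-community usage; pip install vptq
import transformers
import vptq

model_id = "VPTQ-community/Qwen2.5-72B-Instruct-v16-k65536-256-woft"
tokenizer = transformers.AutoTokenizer.from_pretrained(model_id)
model = vptq.AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

inputs = tokenizer("Explain vector post-training quantization briefly.", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```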
|
kostiantynk1205/825fad04-f5da-4163-bd36-750515372a8f | kostiantynk1205 | "2025-01-24T22:43:24Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"codegen",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:katuni4ka/tiny-random-codegen2",
"base_model:adapter:katuni4ka/tiny-random-codegen2",
"region:us"
] | null | "2025-01-24T22:43:01Z" | ---
library_name: peft
base_model: katuni4ka/tiny-random-codegen2
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 825fad04-f5da-4163-bd36-750515372a8f
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: katuni4ka/tiny-random-codegen2
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- e84b064650c996c3_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/e84b064650c996c3_train_data.json
type:
field_instruction: question
field_output: answer
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk1205/825fad04-f5da-4163-bd36-750515372a8f
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/e84b064650c996c3_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: <|endoftext|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: 790d6a39-0dd2-4b13-bb2e-64fab612f643
wandb_project: Birthday-SN56-6-Gradients-On-Demand
wandb_run: your_name
wandb_runid: 790d6a39-0dd2-4b13-bb2e-64fab612f643
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 825fad04-f5da-4163-bd36-750515372a8f
This model is a fine-tuned version of [katuni4ka/tiny-random-codegen2](https://huggingface.co/katuni4ka/tiny-random-codegen2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 10.8575
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 43.4033 | 0.0010 | 1 | 10.8594 |
| 43.4676 | 0.0029 | 3 | 10.8593 |
| 43.4053 | 0.0059 | 6 | 10.8586 |
| 43.3929 | 0.0088 | 9 | 10.8575 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
zipp425/synthwavePunk-v3a | zipp425 | "2023-01-19T03:31:01Z" | 3 | 0 | diffusers | [
"diffusers",
"license:creativeml-openrail-m",
"autotrain_compatible",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | text-to-image | "2023-01-18T15:59:21Z" | ---
license: creativeml-openrail-m
---
|
Shannonjunior/886e3b6b-61bf-40a8-8ce2-5abcdd7a1fc6 | Shannonjunior | "2025-04-07T07:23:03Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-07T07:22:42Z" | |
mradermacher/Qwen2.5-ColdBrew-R1-GGUF | mradermacher | "2025-01-30T07:05:00Z" | 282 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:Theros/Qwen2.5-ColdBrew-R1",
"base_model:quantized:Theros/Qwen2.5-ColdBrew-R1",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-01-30T06:49:07Z" | ---
base_model: Theros/Qwen2.5-ColdBrew-R1
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
static quants of https://huggingface.co/Theros/Qwen2.5-ColdBrew-R1
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
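If a quant ever ships as multiple parts, the parts simply need to be concatenated in order into a single `.gguf`; a minimal sketch (the filenames below are hypothetical):
```python
# Concatenate split GGUF parts in order; the names below are hypothetical.
import glob
import shutil

parts = sorted(glob.glob("Qwen2.5-ColdBrew-R1.Q8_0.gguf.part*"))
with open("Qwen2.5-ColdBrew-R1.Q8_0.gguf", "wb") as out:
    for part in parts:
        with open(part, "rb") as f:
            shutil.copyfileobj(f, out)
```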
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ColdBrew-R1-GGUF/resolve/main/Qwen2.5-ColdBrew-R1.Q2_K.gguf) | Q2_K | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ColdBrew-R1-GGUF/resolve/main/Qwen2.5-ColdBrew-R1.Q3_K_S.gguf) | Q3_K_S | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ColdBrew-R1-GGUF/resolve/main/Qwen2.5-ColdBrew-R1.Q3_K_M.gguf) | Q3_K_M | 3.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ColdBrew-R1-GGUF/resolve/main/Qwen2.5-ColdBrew-R1.Q3_K_L.gguf) | Q3_K_L | 4.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ColdBrew-R1-GGUF/resolve/main/Qwen2.5-ColdBrew-R1.IQ4_XS.gguf) | IQ4_XS | 4.3 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ColdBrew-R1-GGUF/resolve/main/Qwen2.5-ColdBrew-R1.Q4_K_S.gguf) | Q4_K_S | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ColdBrew-R1-GGUF/resolve/main/Qwen2.5-ColdBrew-R1.Q4_K_M.gguf) | Q4_K_M | 4.8 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ColdBrew-R1-GGUF/resolve/main/Qwen2.5-ColdBrew-R1.Q5_K_S.gguf) | Q5_K_S | 5.4 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ColdBrew-R1-GGUF/resolve/main/Qwen2.5-ColdBrew-R1.Q5_K_M.gguf) | Q5_K_M | 5.5 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ColdBrew-R1-GGUF/resolve/main/Qwen2.5-ColdBrew-R1.Q6_K.gguf) | Q6_K | 6.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ColdBrew-R1-GGUF/resolve/main/Qwen2.5-ColdBrew-R1.Q8_0.gguf) | Q8_0 | 8.2 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-ColdBrew-R1-GGUF/resolve/main/Qwen2.5-ColdBrew-R1.f16.gguf) | f16 | 15.3 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
keremnazliel/distilbert_squad_for_musique_6 | keremnazliel | "2023-06-21T19:05:34Z" | 103 | 0 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | question-answering | "2023-06-21T18:39:04Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbert_squad_for_musique_6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_squad_for_musique_6
This model is a fine-tuned version of [distilbert-base-cased-distilled-squad](https://huggingface.co/distilbert-base-cased-distilled-squad) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
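A minimal usage sketch with the standard `transformers` question-answering pipeline (the question and context are illustrative only):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="keremnazliel/distilbert_squad_for_musique_6",
)
# Illustrative inputs; replace with your own question/context pair
result = qa(
    question="Who wrote the report?",
    context="The report was written by a research team at Example University.",
)
print(result["answer"], result["score"])
```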
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.1
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.30.2
- Pytorch 2.0.1+cu118
- Datasets 2.13.0
- Tokenizers 0.13.3
|
deeponh/malayalam_gemma_NORMAL_distil_9b_9b_R3 | deeponh | "2025-04-14T14:20:19Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-14T14:13:15Z" | |
sajjadi/timm-vit_large_patch16_224.mae-lora | sajjadi | "2025-04-30T22:36:12Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"generated_from_trainer",
"region:us"
] | null | "2025-04-30T21:01:14Z" | ---
base_model: vit_large_patch16_224.mae
library_name: peft
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: timm-vit_large_patch16_224.mae-lora
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="200" height="32"/>](https://wandb.ai/sajjadi/Fast-PEFT/runs/4rlmh39q)
# timm-vit_large_patch16_224.mae-lora
This model is a fine-tuned version of [vit_large_patch16_224.mae](https://huggingface.co/vit_large_patch16_224.mae) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3588
- Accuracy: 0.902
- Solar Loss: 2.1634
- Solar Accuracy: 0.249
## Model description
More information needed
## Intended uses & limitations
More information needed
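In the absence of author-provided code, the following is only a sketch of how such an adapter is commonly loaded, assuming the base checkpoint is the `timm` model named in the metadata and that PEFT can wrap it directly; the number of classes is a placeholder:
```python
import timm
from peft import PeftModel

# Assumed base checkpoint from the card metadata; num_classes is a placeholder
base = timm.create_model("vit_large_patch16_224.mae", pretrained=True, num_classes=10)
model = PeftModel.from_pretrained(base, "sajjadi/timm-vit_large_patch16_224.mae-lora")
model.eval()
```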
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.002
- train_batch_size: 128
- eval_batch_size: 128
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 512
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Solar Accuracy | Solar Loss |
|:-------------:|:------:|:----:|:---------------:|:--------:|:------:|
| 0.6834 | 0.9923 | 97 | 0.3588 | 0.249 | 2.1634 |
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.1
- Pytorch 2.5.1+cu121
- Datasets 3.0.1
- Tokenizers 0.21.0 |
mradermacher/openbuddy-llama3.2-3b-v23.1-131k-i1-GGUF | mradermacher | "2025-04-01T07:10:30Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama-3.2",
"zh",
"en",
"fr",
"de",
"ja",
"ko",
"it",
"fi",
"base_model:OpenBuddy/openbuddy-llama3.2-3b-v23.1-131k",
"base_model:quantized:OpenBuddy/openbuddy-llama3.2-3b-v23.1-131k",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-04-01T06:30:57Z" | |
PrunaAI/abhishek-autotrain-llama3-no-robots-QUANTO-int2bit-smashed | PrunaAI | "2024-07-19T09:30:02Z" | 7 | 0 | transformers | [
"transformers",
"pruna-ai",
"base_model:abhishek/autotrain-llama3-no-robots",
"base_model:finetune:abhishek/autotrain-llama3-no-robots",
"endpoints_compatible",
"region:us"
] | null | "2024-07-17T20:59:40Z" | ---
thumbnail: "https://assets-global.website-files.com/646b351987a8d8ce158d1940/64ec9e96b4334c0e1ac41504_Logo%20with%20white%20text.svg"
base_model: abhishek/autotrain-llama3-no-robots
metrics:
- memory_disk
- memory_inference
- inference_latency
- inference_throughput
- inference_CO2_emissions
- inference_energy_consumption
tags:
- pruna-ai
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<a href="https://www.pruna.ai/" target="_blank" rel="noopener noreferrer">
<img src="https://i.imgur.com/eDAlcgk.png" alt="PrunaAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</a>
</div>
<!-- header end -->
[](https://twitter.com/PrunaAI)
[](https://github.com/PrunaAI)
[](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[](https://discord.gg/rskEr4BZJx)
# Simply make AI models cheaper, smaller, faster, and greener!
- Give a thumbs up if you like this model!
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your *own* AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- Read the documentations to know more [here](https://pruna-ai-pruna.readthedocs-hosted.com/en/latest/)
- Join Pruna AI community on Discord [here](https://discord.gg/CP4VSgck) to share feedback/suggestions or get help.
## Results

**Frequently Asked Questions**
- ***How does the compression work?*** The model is compressed with quanto.
- ***How does the model quality change?*** The quality of the model output might vary compared to the base model.
- ***How is the model efficiency evaluated?*** These results were obtained on HARDWARE_NAME with configuration described in `model/smash_config.json` and are obtained after a hardware warmup. The smashed model is directly compared to the original base model. Efficiency results may vary in other settings (e.g. other hardware, image size, batch size, ...). We recommend to directly run them in the use-case conditions to know if the smashed model can benefit you.
- ***What is the model format?*** We use safetensors.
- ***What calibration data has been used?*** If needed by the compression method, we used WikiText as the calibration data.
- ***What is the naming convention for Pruna Huggingface models?*** We take the original model name and append "turbo", "tiny", or "green" if the smashed model has a measured inference speed, inference memory, or inference energy consumption which is less than 90% of the original base model.
- ***How to compress my own models?*** You can request premium access to more compression methods and tech support for your specific use-cases [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai).
- ***What are "first" metrics?*** Results mentioning "first" are obtained after the first run of the model. The first run might take more memory or be slower than the subsequent runs due cuda overheads.
- ***What are "Sync" and "Async" metrics?*** "Sync" metrics are obtained by syncing all GPU processes and stop measurement when all of them are executed. "Async" metrics are obtained without syncing all GPU processes and stop when the model output can be used by the CPU. We provide both metrics since both could be relevant depending on the use-case. We recommend to test the efficiency gains directly in your use-cases.
## Setup
You can run the smashed model with these steps:
0. Check that the requirements from the original repo abhishek/autotrain-llama3-no-robots are installed. In particular, check python, cuda, and transformers versions.
1. Make sure that you have installed quantization related packages.
```bash
pip install quanto
```
2. Load & run the model.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("PrunaAI/abhishek-autotrain-llama3-no-robots-QUANTO-int2bit-smashed", trust_remote_code=True, device_map='auto')
tokenizer = AutoTokenizer.from_pretrained("abhishek/autotrain-llama3-no-robots")
input_ids = tokenizer("What is the color of prunes?", return_tensors='pt').to(model.device)["input_ids"]
outputs = model.generate(input_ids, max_new_tokens=216)
tokenizer.decode(outputs[0])
```
## Configurations
The configuration info are in `smash_config.json`.
## Credits & License
The license of the smashed model follows the license of the original model. Please check the license of the original model abhishek/autotrain-llama3-no-robots before using this model which provided the base model. The license of the `pruna-engine` is [here](https://pypi.org/project/pruna-engine/) on Pypi.
## Want to compress other models?
- Contact us and tell us which model to compress next [here](https://www.pruna.ai/contact).
- Request access to easily compress your own AI models [here](https://z0halsaff74.typeform.com/pruna-access?typeform-source=www.pruna.ai). |
jonas-luehrs/bert-base-uncased-MLP-scirepeval-chemistry-LARGE-textCLS-RHEOLOGY-20230913-3 | jonas-luehrs | "2023-09-13T14:04:58Z" | 112 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:bluesky333/chemical_language_understanding_benchmark",
"base_model:jonas-luehrs/bert-base-uncased-MLP-scirepeval-chemistry-LARGE",
"base_model:finetune:jonas-luehrs/bert-base-uncased-MLP-scirepeval-chemistry-LARGE",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2023-09-13T13:04:04Z" | ---
license: apache-2.0
base_model: jonas-luehrs/bert-base-uncased-MLP-scirepeval-chemistry-LARGE
tags:
- generated_from_trainer
metrics:
- f1
- precision
- recall
- accuracy
model-index:
- name: bert-base-uncased-MLP-scirepeval-chemistry-LARGE-textCLS-RHEOLOGY-20230913-3
results: []
datasets:
- bluesky333/chemical_language_understanding_benchmark
language:
- en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-MLP-scirepeval-chemistry-LARGE-textCLS-RHEOLOGY-20230913-3
This model is a fine-tuned version of [jonas-luehrs/bert-base-uncased-MLP-scirepeval-chemistry-LARGE](https://huggingface.co/jonas-luehrs/bert-base-uncased-MLP-scirepeval-chemistry-LARGE) on the RHEOLOGY dataset of the [bluesky333/chemical_language_understanding_benchmark](https://huggingface.co/datasets/bluesky333/chemical_language_understanding_benchmark).
It achieves the following results on the evaluation set:
- Loss: 0.6836
- F1: 0.7805
- Precision: 0.7860
- Recall: 0.7840
- Accuracy: 0.7840
## Model description
More information needed
## Intended uses & limitations
More information needed
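A minimal inference sketch with the `transformers` text-classification pipeline (the input sentence is illustrative; the label set comes from the RHEOLOGY task):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="jonas-luehrs/bert-base-uncased-MLP-scirepeval-chemistry-LARGE-textCLS-RHEOLOGY-20230913-3",
)
# Illustrative input sentence
print(classifier("The storage modulus increases sharply above the gel point."))
```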
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Recall | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------:|:------:|:--------:|
| 1.1777 | 1.0 | 46 | 0.8465 | 0.6593 | 0.6346 | 0.7037 | 0.7037 |
| 0.6923 | 2.0 | 92 | 0.7123 | 0.7491 | 0.7654 | 0.7593 | 0.7593 |
| 0.4974 | 3.0 | 138 | 0.6906 | 0.7563 | 0.7667 | 0.7593 | 0.7593 |
| 0.3789 | 4.0 | 184 | 0.6754 | 0.7645 | 0.7712 | 0.7716 | 0.7716 |
| 0.3053 | 5.0 | 230 | 0.6836 | 0.7805 | 0.7860 | 0.7840 | 0.7840 |
### Framework versions
- Transformers 4.33.1
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3 |
RetaSy/whisper-test-ar-tarteel | RetaSy | "2022-12-15T11:01:43Z" | 6 | 1 | transformers | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2022-12-15T10:47:09Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: whisper-test-ar-tarteel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-test-ar-tarteel
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
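A minimal transcription sketch using the `transformers` Whisper classes (the audio below is a silent placeholder; replace it with a real 16 kHz mono waveform):
```python
import numpy as np
from transformers import WhisperProcessor, WhisperForConditionalGeneration

repo = "RetaSy/whisper-test-ar-tarteel"
processor = WhisperProcessor.from_pretrained(repo)
model = WhisperForConditionalGeneration.from_pretrained(repo)

# Placeholder: one second of silence at 16 kHz
audio = np.zeros(16000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
predicted_ids = model.generate(inputs.input_features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```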
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
MrRobotoAI/123-Q4_K_M-GGUF | MrRobotoAI | "2025-04-25T09:37:58Z" | 143 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:MrRobotoAI/123",
"base_model:quantized:MrRobotoAI/123",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-25T09:37:35Z" | ---
base_model: MrRobotoAI/123
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# MrRobotoAI/123-Q4_K_M-GGUF
This model was converted to GGUF format from [`MrRobotoAI/123`](https://huggingface.co/MrRobotoAI/123) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/MrRobotoAI/123) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo MrRobotoAI/123-Q4_K_M-GGUF --hf-file 123-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo MrRobotoAI/123-Q4_K_M-GGUF --hf-file 123-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g. `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo MrRobotoAI/123-Q4_K_M-GGUF --hf-file 123-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo MrRobotoAI/123-Q4_K_M-GGUF --hf-file 123-q4_k_m.gguf -c 2048
```
|
espnet/khassan_KSC_transformer | espnet | "2023-05-16T09:52:58Z" | 0 | 1 | espnet | [
"espnet",
"tensorboard",
"automatic-speech-recognition",
"kk",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | "2023-05-15T08:40:55Z" | ---
license: cc-by-4.0
language:
- kk
metrics:
- wer
- cer
library_name: espnet
pipeline_tag: automatic-speech-recognition
---
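A minimal decoding sketch, assuming the checkpoint resolves through `espnet_model_zoo` (install `espnet` and `espnet_model_zoo` first; the audio path is a placeholder):
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Assumes espnet_model_zoo is installed so the tag resolves to this repo
speech2text = Speech2Text.from_pretrained("espnet/khassan_KSC_transformer")

# Placeholder path: any 16 kHz mono Kazakh speech recording
speech, rate = soundfile.read("sample.wav")
text, *_ = speech2text(speech)[0]
print(text)
```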
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon May 15 16:32:55 CST 2023`
- python version: `3.8.16 (default, Mar 2 2023, 03:21:46) [GCC 11.2.0]`
- espnet version: `espnet 202304`
- pytorch version: `pytorch 1.13.1`
- Git hash: `3949a7db023d591e91627efb997eda353b54005d`
- Commit date: `Thu May 11 17:54:50 2023 +0800`
## exp/asr_train_raw_bpe2000_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_model_valid.acc.ave/test|3334|35884|90.6|8.6|0.8|1.1|10.5|55.1|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_model_valid.acc.ave/test|3334|259552|97.9|1.2|0.9|0.8|2.9|55.1|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_model_valid.acc.ave/test|3334|71707|91.9|5.6|2.5|1.1|9.2|55.1|
## exp/asr_train_raw_bpe2000_sp/decode_asr_model_valid.acc.ave
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|org/dev|3283|35275|89.0|10.0|1.0|1.2|12.1|59.0|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|org/dev|3283|253600|97.4|1.4|1.2|1.0|3.5|59.0|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|org/dev|3283|69428|90.5|6.7|2.8|1.3|10.8|59.0| |
shivanikerai/Llama-2-7b-chat-hf-adapter-banner-ocr-ner-v1 | shivanikerai | "2024-01-04T05:28:04Z" | 1 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:adapter:meta-llama/Llama-2-7b-chat-hf",
"region:us"
] | null | "2024-01-04T05:27:48Z" | ---
library_name: peft
base_model: meta-llama/Llama-2-7b-chat-hf
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
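In the absence of author-provided code, a minimal loading sketch, assuming this repo holds a PEFT adapter for the base model named in the metadata (access to the gated Llama 2 weights is required):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Base model taken from the card metadata; the adapter is this repo
base = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-chat-hf", device_map="auto"
)
model = PeftModel.from_pretrained(
    base, "shivanikerai/Llama-2-7b-chat-hf-adapter-banner-ocr-ner-v1"
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-chat-hf")
```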
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.7.2.dev0 |
numinousmuses/Levlex-Math-14B-v1-16bit | numinousmuses | "2025-03-03T19:13:05Z" | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/phi-4-bnb-4bit",
"base_model:finetune:unsloth/phi-4-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-03-03T19:13:00Z" | ---
base_model: unsloth/phi-4-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** numinousmuses
- **License:** apache-2.0
- **Finetuned from model :** unsloth/phi-4-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
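A minimal loading sketch, assuming the merged 16-bit weights load with plain `transformers` (the prompt is illustrative):
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "numinousmuses/Levlex-Math-14B-v1-16bit"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("Solve: 12 * 7 =", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```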
|
Grubbe2/dqn-SpaceInvadersNoFrameskip-v4 | Grubbe2 | "2024-03-15T12:35:27Z" | 0 | 0 | stable-baselines3 | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2024-03-15T12:34:53Z" | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 709.00 +/- 176.25
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Grubbe2 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Grubbe2 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
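The checkpoint can also be loaded directly in Python; a sketch with `huggingface_sb3` (the filename is an assumption based on the RL Zoo naming convention):
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Filename assumed from the RL Zoo naming convention
checkpoint = load_from_hub(
    repo_id="Grubbe2/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
```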
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Grubbe2
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
tonyshelby/Llama_3.2_3B_gguf_final | tonyshelby | "2025-03-24T10:50:56Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Llama-3.2-3B",
"base_model:quantized:unsloth/Llama-3.2-3B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-24T10:50:06Z" | ---
base_model: unsloth/Llama-3.2-3B
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** tonyshelby
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Llama-3.2-3B
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
kostiantynk/65ec7743-3eb1-43ec-9081-3946f44a50b9 | kostiantynk | "2025-01-28T00:09:31Z" | 7 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:01-ai/Yi-1.5-9B-Chat-16K",
"base_model:adapter:01-ai/Yi-1.5-9B-Chat-16K",
"license:apache-2.0",
"region:us"
] | null | "2025-01-28T00:04:30Z" | ---
library_name: peft
license: apache-2.0
base_model: 01-ai/Yi-1.5-9B-Chat-16K
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 65ec7743-3eb1-43ec-9081-3946f44a50b9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: 01-ai/Yi-1.5-9B-Chat-16K
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 656aeb34f8bb5745_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/656aeb34f8bb5745_train_data.json
type:
field_instruction: title
field_output: text
format: '{instruction}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 2
gradient_checkpointing: false
group_by_length: false
hub_model_id: kostiantynk/65ec7743-3eb1-43ec-9081-3946f44a50b9
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 10
lora_alpha: 32
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 16
lora_target_linear: true
lr_scheduler: cosine
max_steps: 50
micro_batch_size: 2
mlflow_experiment_name: /tmp/656aeb34f8bb5745_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ba15d1f6-1b00-495f-b909-7674b8afcf2f
wandb_project: Mine-SN56-22-Gradients-On-Demand
wandb_run: your_name
wandb_runid: ba15d1f6-1b00-495f-b909-7674b8afcf2f
warmup_steps: 5
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 65ec7743-3eb1-43ec-9081-3946f44a50b9
This model is a fine-tuned version of [01-ai/Yi-1.5-9B-Chat-16K](https://huggingface.co/01-ai/Yi-1.5-9B-Chat-16K) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5206
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 4
- optimizer: Use OptimizerNames.ADAMW_BNB with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- training_steps: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0002 | 1 | 2.9460 |
| 2.1829 | 0.0020 | 13 | 0.6620 |
| 0.6131 | 0.0040 | 26 | 0.5328 |
| 0.4969 | 0.0060 | 39 | 0.5206 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
dim/xglm-4.5b_dolly_oasst1_chip2 | dim | "2023-09-20T10:38:18Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-09-20T10:37:14Z" | ---
library_name: peft
---
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig
import torch
from peft import PeftModel, PeftConfig
class GoralConversation:
def __init__(
self,
message_template=" <s> {role}\n{content} </s>\n",
system_prompt="Ты — Горал, русскоязычный автоматический ассистент. Ты разговариваешь с людьми и помогаешь им.",
start_token_id=1,
bot_token_id=9225,
):
self.message_template = message_template
self.start_token_id = start_token_id
self.bot_token_id = bot_token_id
self.messages = [{"role": "system", "content": system_prompt}]
def get_start_token_id(self):
return self.start_token_id
def get_bot_token_id(self):
return self.bot_token_id
def add_user_message(self, message):
self.messages.append({"role": "user", "content": message})
def add_bot_message(self, message):
self.messages.append({"role": "bot", "content": message})
def get_prompt(self, tokenizer):
final_text = ""
for message in self.messages:
message_text = self.message_template.format(**message)
final_text += message_text
final_text += tokenizer.decode(
[
self.start_token_id,
]
)
final_text += " "
final_text += tokenizer.decode([self.bot_token_id])
return final_text.strip()
def generate(model, tokenizer, prompt, generation_config):
data = tokenizer(
prompt,
return_tensors="pt",
truncation=True,
max_length=2048,
)
data = {k: v.to(model.device) for k, v in data.items()}
output_ids = model.generate(**data, generation_config=generation_config)[0]
output_ids = output_ids[len(data["input_ids"][0]) :]
output = tokenizer.decode(output_ids, skip_special_tokens=True)
return output.strip()
weights_path = "dim/xglm-4.5b_dolly_oasst1_chip2"
access_token = ""
config = PeftConfig.from_pretrained(weights_path)
model = AutoModelForCausalLM.from_pretrained(
config.base_model_name_or_path,
load_in_8bit=True,
torch_dtype=torch.float16,
device_map={"": 0},
token=access_token,
)
model = PeftModel.from_pretrained(
model,
weights_path,
torch_dtype=torch.float16,
)
model.eval()
tokenizer = AutoTokenizer.from_pretrained(weights_path)
generation_config = GenerationConfig.from_pretrained(weights_path)
generation_config.do_sample = False
inp = "Напишите интересный пост в блоге о недавней поездке на Гавайи, рассказывая о культурном опыте и достопримечательностях, которые обязательно нужно увидеть."
conversation = GoralConversation(
start_token_id=0,
bot_token_id=7425,
)
conversation.add_user_message(inp)
prompt = conversation.get_prompt(tokenizer)
output = generate(model, tokenizer, prompt, generation_config)
print(inp)
print(output)
# Я был там! Это было незабываемое путешествие, которое я никогда не забуду. Мы посетили все основные достопримечательности острова, включая пляжи, вулканы, пещеры, национальные парки и многое другое. Впечатления от посещения были потрясающими, а культура - уникальной. Поездка была отличным способом исследовать остров и узнать больше об истории его жителей. Надеюсь, что вы также захотите посетить это место!
```
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- load_in_8bit: True
- load_in_4bit: False
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: fp4
- bnb_4bit_use_double_quant: False
- bnb_4bit_compute_dtype: float32
### Framework versions
- PEFT 0.4.0
|
gaianet/EXAONE-Deep-2.4B-GGUF | gaianet | "2025-03-19T03:26:57Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"exaone",
"text-generation",
"lg-ai",
"custom_code",
"en",
"ko",
"base_model:LGAI-EXAONE/EXAONE-Deep-2.4B",
"base_model:quantized:LGAI-EXAONE/EXAONE-Deep-2.4B",
"license:other",
"autotrain_compatible",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-03-19T03:10:31Z" | ---
license: other
license_name: exaone
license_link: LICENSE
model_name: EXAONE-Deep-2.4B
base_model: LGAI-EXAONE/EXAONE-Deep-2.4B
quantized_by: Second State Inc.
language:
- en
- ko
tags:
- lg-ai
- exaone
- gguf
pipeline_tag: text-generation
library_name: transformers
---
# EXAONE-Deep-2.4B-GGUF
## Original Model
[LGAI-EXAONE/EXAONE-Deep-2.4B](https://huggingface.co/LGAI-EXAONE/EXAONE-Deep-2.4B)
## Run with Gaianet
**Prompt template**
prompt template: `exaone-deep-chat`
**Context size**
chat_ctx_size: `32000`
**Run with GaiaNet**
- Quick start: https://docs.gaianet.ai/node-guide/quick-start
- Customize your node: https://docs.gaianet.ai/node-guide/customize
*Quantized with llama.cpp b4920*
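Outside GaiaNet, the GGUF file can also be loaded with `llama-cpp-python`; a sketch (the quant filename pattern is an assumption about this repo's file names):
```python
from llama_cpp import Llama

# The filename glob is an assumption about the quant names in this repo
llm = Llama.from_pretrained(
    repo_id="gaianet/EXAONE-Deep-2.4B-GGUF",
    filename="*Q4_K_M*",
    n_ctx=32000,
)
print(llm("Hello", max_tokens=32)["choices"][0]["text"])
```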
|
jainamk/cartpool | jainamk | "2024-03-13T01:44:08Z" | 0 | 0 | null | [
"CartPole-v1",
"reinforce",
"reinforcement-learning",
"custom-implementation",
"deep-rl-class",
"model-index",
"region:us"
] | reinforcement-learning | "2024-03-13T01:43:58Z" | ---
tags:
- CartPole-v1
- reinforce
- reinforcement-learning
- custom-implementation
- deep-rl-class
model-index:
- name: cartpool
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: CartPole-v1
type: CartPole-v1
metrics:
- type: mean_reward
value: 500.00 +/- 0.00
name: mean_reward
verified: false
---
# **Reinforce** Agent playing **CartPole-v1**
This is a trained model of a **Reinforce** agent playing **CartPole-v1**.
To learn how to use this model and train your own, check out Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
|
tscstudios/iwal7zawwerd8k7vjzyubn9guup1_3727ed6a-95cb-4d68-931d-cc8bb548944f | tscstudios | "2025-03-14T11:49:23Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-03-14T11:49:22Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Iwal7Zawwerd8K7Vjzyubn9Guup1_3727Ed6A 95Cb 4D68 931D Cc8Bb548944F
<Gallery />
Trained on Replicate using:
https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('tscstudios/iwal7zawwerd8k7vjzyubn9guup1_3727ed6a-95cb-4d68-931d-cc8bb548944f', weight_name='lora.safetensors')
image = pipeline('your prompt').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
|
bh8648/base_epoch4-copy2_test3 | bh8648 | "2023-12-10T09:40:10Z" | 0 | 0 | peft | [
"peft",
"region:us"
] | null | "2023-12-09T13:40:45Z" | ---
library_name: peft
---
## Training procedure
The following `bitsandbytes` quantization config was used during training:
- quant_method: bitsandbytes
- load_in_8bit: False
- load_in_4bit: True
- llm_int8_threshold: 6.0
- llm_int8_skip_modules: None
- llm_int8_enable_fp32_cpu_offload: False
- llm_int8_has_fp16_weight: False
- bnb_4bit_quant_type: nf4
- bnb_4bit_use_double_quant: True
- bnb_4bit_compute_dtype: bfloat16
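For reference, a sketch of the equivalent `BitsAndBytesConfig` for the settings above (pass it via `quantization_config` when loading the base model):
```python
import torch
from transformers import BitsAndBytesConfig

# Mirrors the 4-bit settings listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```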
### Framework versions
- PEFT 0.5.0
|
mrferr3t/8c94b76a-de1a-48ca-bf3c-dc3d6d840acb | mrferr3t | "2025-01-30T17:20:35Z" | 8 | 0 | peft | [
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:NousResearch/Llama-3.2-1B",
"base_model:adapter:NousResearch/Llama-3.2-1B",
"license:llama3.2",
"region:us"
] | null | "2025-01-30T17:19:32Z" | ---
library_name: peft
license: llama3.2
base_model: NousResearch/Llama-3.2-1B
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 8c94b76a-de1a-48ca-bf3c-dc3d6d840acb
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: NousResearch/Llama-3.2-1B
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 79385beddd9dbca7_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/79385beddd9dbca7_train_data.json
type:
field_input: attempts
field_instruction: problem
field_output: solution
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_steps: 50
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: mrferr3t/8c94b76a-de1a-48ca-bf3c-dc3d6d840acb
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.0005
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 99
micro_batch_size: 2
mlflow_experiment_name: /tmp/79385beddd9dbca7_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 300
saves_per_epoch: 0
sequence_len: 512
special_tokens:
pad_token: <|end_of_text|>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: dd2d88ea-d931-49ef-8118-f7a5dfe074f2
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: dd2d88ea-d931-49ef-8118-f7a5dfe074f2
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 8c94b76a-de1a-48ca-bf3c-dc3d6d840acb
This model is a fine-tuned version of [NousResearch/Llama-3.2-1B](https://huggingface.co/NousResearch/Llama-3.2-1B) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0370
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Use adamw_bnb_8bit with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 99
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3329 | 0.0011 | 1 | 0.3322 |
| 0.0063 | 0.0532 | 50 | 0.0370 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.3.1+cu121
- Datasets 3.0.1
- Tokenizers 0.20.1 |
TheBloke/leo-hessianai-13B-GGUF | TheBloke | "2023-09-28T13:43:20Z" | 172 | 2 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation",
"en",
"de",
"dataset:oscar-corpus/OSCAR-2301",
"dataset:wikipedia",
"dataset:bjoernp/tagesschau-2018-2023",
"base_model:LeoLM/leo-hessianai-13b",
"base_model:quantized:LeoLM/leo-hessianai-13b",
"license:llama2",
"region:us"
] | text-generation | "2023-09-28T13:36:34Z" | ---
base_model: LeoLM/leo-hessianai-13b
datasets:
- oscar-corpus/OSCAR-2301
- wikipedia
- bjoernp/tagesschau-2018-2023
inference: false
language:
- en
- de
library_name: transformers
license: llama2
model_creator: LAION LeoLM
model_name: Leo Hessianai 13B
model_type: llama
pipeline_tag: text-generation
prompt_template: '{prompt}
'
quantized_by: TheBloke
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<div style="display: flex; flex-direction: column; align-items: flex-start;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://discord.gg/theblokeai">Chat & support: TheBloke's Discord server</a></p>
</div>
<div style="display: flex; flex-direction: column; align-items: flex-end;">
<p style="margin-top: 0.5em; margin-bottom: 0em;"><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? TheBloke's Patreon page</a></p>
</div>
</div>
<div style="text-align:center; margin-top: 0em; margin-bottom: 0em"><p style="margin-top: 0.25em; margin-bottom: 0em;">TheBloke's LLM work is generously supported by a grant from <a href="https://a16z.com">andreessen horowitz (a16z)</a></p></div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
# Leo Hessianai 13B - GGUF
- Model creator: [LAION LeoLM](https://huggingface.co/LeoLM)
- Original model: [Leo Hessianai 13B](https://huggingface.co/LeoLM/leo-hessianai-13b)
<!-- description start -->
## Description
This repo contains GGUF format model files for [LAION LeoLM's Leo Hessianai 13B](https://huggingface.co/LeoLM/leo-hessianai-13b).
<!-- description end -->
<!-- README_GGUF.md-about-gguf start -->
### About GGUF
GGUF is a new format introduced by the llama.cpp team on August 21st 2023. It is a replacement for GGML, which is no longer supported by llama.cpp.
Here is an incomplete list of clients and libraries that are known to support GGUF:
* [llama.cpp](https://github.com/ggerganov/llama.cpp). The source project for GGUF. Offers a CLI and a server option.
* [text-generation-webui](https://github.com/oobabooga/text-generation-webui), the most widely used web UI, with many features and powerful extensions. Supports GPU acceleration.
* [KoboldCpp](https://github.com/LostRuins/koboldcpp), a fully featured web UI, with GPU accel across all platforms and GPU architectures. Especially good for story telling.
* [LM Studio](https://lmstudio.ai/), an easy-to-use and powerful local GUI for Windows and macOS (Silicon), with GPU acceleration.
* [LoLLMS Web UI](https://github.com/ParisNeo/lollms-webui), a great web UI with many interesting and unique features, including a full model library for easy model selection.
* [Faraday.dev](https://faraday.dev/), an attractive and easy to use character-based chat GUI for Windows and macOS (both Silicon and Intel), with GPU acceleration.
* [ctransformers](https://github.com/marella/ctransformers), a Python library with GPU accel, LangChain support, and OpenAI-compatible AI server.
* [llama-cpp-python](https://github.com/abetlen/llama-cpp-python), a Python library with GPU accel, LangChain support, and OpenAI-compatible API server.
* [candle](https://github.com/huggingface/candle), a Rust ML framework with a focus on performance, including GPU support, and ease of use.
<!-- README_GGUF.md-about-gguf end -->
<!-- repositories-available start -->
## Repositories available
* [AWQ model(s) for GPU inference.](https://huggingface.co/TheBloke/leo-hessianai-13B-AWQ)
* [GPTQ models for GPU inference, with multiple quantisation parameter options.](https://huggingface.co/TheBloke/leo-hessianai-13B-GPTQ)
* [2, 3, 4, 5, 6 and 8-bit GGUF models for CPU+GPU inference](https://huggingface.co/TheBloke/leo-hessianai-13B-GGUF)
* [LAION LeoLM's original unquantised fp16 model in pytorch format, for GPU inference and for further conversions](https://huggingface.co/LeoLM/leo-hessianai-13b)
<!-- repositories-available end -->
<!-- prompt-template start -->
## Prompt template: None
```
{prompt}
```
<!-- prompt-template end -->
<!-- compatibility_gguf start -->
## Compatibility
These quantised GGUFv2 files are compatible with llama.cpp from August 27th onwards, as of commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221)
They are also compatible with many third party UIs and libraries - please see the list at the top of this README.
## Explanation of quantisation methods
<details>
<summary>Click to see details</summary>
The new methods available are:
* GGML_TYPE_Q2_K - "type-1" 2-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Block scales and mins are quantized with 4 bits. This ends up effectively using 2.5625 bits per weight (bpw)
* GGML_TYPE_Q3_K - "type-0" 3-bit quantization in super-blocks containing 16 blocks, each block having 16 weights. Scales are quantized with 6 bits. This ends up using 3.4375 bpw.
* GGML_TYPE_Q4_K - "type-1" 4-bit quantization in super-blocks containing 8 blocks, each block having 32 weights. Scales and mins are quantized with 6 bits. This ends up using 4.5 bpw.
* GGML_TYPE_Q5_K - "type-1" 5-bit quantization. Same super-block structure as GGML_TYPE_Q4_K resulting in 5.5 bpw
* GGML_TYPE_Q6_K - "type-0" 6-bit quantization. Super-blocks with 16 blocks, each block having 16 weights. Scales are quantized with 8 bits. This ends up using 6.5625 bpw
Refer to the Provided Files table below to see what files use which methods, and how.
</details>
<!-- compatibility_gguf end -->
<!-- README_GGUF.md-provided-files start -->
## Provided files
| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ---- | ---- | ---- | ---- | ----- |
| [leo-hessianai-13b.Q2_K.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-GGUF/blob/main/leo-hessianai-13b.Q2_K.gguf) | Q2_K | 2 | 5.43 GB| 7.93 GB | smallest, significant quality loss - not recommended for most purposes |
| [leo-hessianai-13b.Q3_K_S.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-GGUF/blob/main/leo-hessianai-13b.Q3_K_S.gguf) | Q3_K_S | 3 | 5.66 GB| 8.16 GB | very small, high quality loss |
| [leo-hessianai-13b.Q3_K_M.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-GGUF/blob/main/leo-hessianai-13b.Q3_K_M.gguf) | Q3_K_M | 3 | 6.34 GB| 8.84 GB | very small, high quality loss |
| [leo-hessianai-13b.Q3_K_L.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-GGUF/blob/main/leo-hessianai-13b.Q3_K_L.gguf) | Q3_K_L | 3 | 6.93 GB| 9.43 GB | small, substantial quality loss |
| [leo-hessianai-13b.Q4_0.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-GGUF/blob/main/leo-hessianai-13b.Q4_0.gguf) | Q4_0 | 4 | 7.37 GB| 9.87 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
| [leo-hessianai-13b.Q4_K_S.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-GGUF/blob/main/leo-hessianai-13b.Q4_K_S.gguf) | Q4_K_S | 4 | 7.41 GB| 9.91 GB | small, greater quality loss |
| [leo-hessianai-13b.Q4_K_M.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-GGUF/blob/main/leo-hessianai-13b.Q4_K_M.gguf) | Q4_K_M | 4 | 7.87 GB| 10.37 GB | medium, balanced quality - recommended |
| [leo-hessianai-13b.Q5_0.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-GGUF/blob/main/leo-hessianai-13b.Q5_0.gguf) | Q5_0 | 5 | 8.97 GB| 11.47 GB | legacy; medium, balanced quality - prefer using Q4_K_M |
| [leo-hessianai-13b.Q5_K_S.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-GGUF/blob/main/leo-hessianai-13b.Q5_K_S.gguf) | Q5_K_S | 5 | 8.97 GB| 11.47 GB | large, low quality loss - recommended |
| [leo-hessianai-13b.Q5_K_M.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-GGUF/blob/main/leo-hessianai-13b.Q5_K_M.gguf) | Q5_K_M | 5 | 9.23 GB| 11.73 GB | large, very low quality loss - recommended |
| [leo-hessianai-13b.Q6_K.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-GGUF/blob/main/leo-hessianai-13b.Q6_K.gguf) | Q6_K | 6 | 10.68 GB| 13.18 GB | very large, extremely low quality loss |
| [leo-hessianai-13b.Q8_0.gguf](https://huggingface.co/TheBloke/leo-hessianai-13B-GGUF/blob/main/leo-hessianai-13b.Q8_0.gguf) | Q8_0 | 8 | 13.83 GB| 16.33 GB | very large, extremely low quality loss - not recommended |
**Note**: the above RAM figures assume no GPU offloading. If layers are offloaded to the GPU, this will reduce RAM usage and use VRAM instead.
<!-- README_GGUF.md-provided-files end -->
<!-- README_GGUF.md-how-to-download start -->
## How to download GGUF files
**Note for manual downloaders:** You almost never want to clone the entire repo! Multiple different quantisation formats are provided, and most users only want to pick and download a single file.
The following clients/libraries will automatically download models for you, providing a list of available models to choose from:
- LM Studio
- LoLLMS Web UI
- Faraday.dev
### In `text-generation-webui`
Under Download Model, you can enter the model repo: TheBloke/leo-hessianai-13B-GGUF and below it, a specific filename to download, such as: leo-hessianai-13b.Q4_K_M.gguf.
Then click Download.
### On the command line, including multiple files at once
I recommend using the `huggingface-hub` Python library:
```shell
pip3 install huggingface-hub
```
Then you can download any individual model file to the current directory, at high speed, with a command like this:
```shell
huggingface-cli download TheBloke/leo-hessianai-13B-GGUF leo-hessianai-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
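If you prefer Python over the shell, the same single-file download can be done with the `huggingface_hub` library directly. A minimal sketch, reusing the repo and file names from the command above:

```python
from huggingface_hub import hf_hub_download

# Downloads one GGUF file into the current directory and returns its local path.
path = hf_hub_download(
    repo_id="TheBloke/leo-hessianai-13B-GGUF",
    filename="leo-hessianai-13b.Q4_K_M.gguf",
    local_dir=".",
)
print(path)
```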
<details>
<summary>More advanced huggingface-cli download usage</summary>
You can also download multiple files at once with a pattern:
```shell
huggingface-cli download TheBloke/leo-hessianai-13B-GGUF --local-dir . --local-dir-use-symlinks False --include='*Q4_K*gguf'
```
For more documentation on downloading with `huggingface-cli`, please see: [HF -> Hub Python Library -> Download files -> Download from the CLI](https://huggingface.co/docs/huggingface_hub/guides/download#download-from-the-cli).
To accelerate downloads on fast connections (1Gbit/s or higher), install `hf_transfer`:
```shell
pip3 install hf_transfer
```
And set environment variable `HF_HUB_ENABLE_HF_TRANSFER` to `1`:
```shell
HF_HUB_ENABLE_HF_TRANSFER=1 huggingface-cli download TheBloke/leo-hessianai-13B-GGUF leo-hessianai-13b.Q4_K_M.gguf --local-dir . --local-dir-use-symlinks False
```
Windows Command Line users: You can set the environment variable by running `set HF_HUB_ENABLE_HF_TRANSFER=1` before the download command.
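PowerShell users can set it with:

```shell
$env:HF_HUB_ENABLE_HF_TRANSFER = "1"
```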
</details>
<!-- README_GGUF.md-how-to-download end -->
<!-- README_GGUF.md-how-to-run start -->
## Example `llama.cpp` command
Make sure you are using `llama.cpp` from commit [d0cee0d](https://github.com/ggerganov/llama.cpp/commit/d0cee0d36d5be95a0d9088b674dbb27354107221) or later.
```shell
./main -ngl 32 -m leo-hessianai-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -n -1 -p "{prompt}"
```
Change `-ngl 32` to the number of layers to offload to GPU. Remove it if you don't have GPU acceleration.
Change `-c 4096` to the desired sequence length. For extended sequence models - eg 8K, 16K, 32K - the necessary RoPE scaling parameters are read from the GGUF file and set by llama.cpp automatically.
If you want to have a chat-style conversation, replace the `-p <PROMPT>` argument with `-i -ins`
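For example:

```shell
./main -ngl 32 -m leo-hessianai-13b.Q4_K_M.gguf --color -c 4096 --temp 0.7 --repeat_penalty 1.1 -i -ins
```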
For other parameters and how to use them, please refer to [the llama.cpp documentation](https://github.com/ggerganov/llama.cpp/blob/master/examples/main/README.md)
## How to run in `text-generation-webui`
Further instructions here: [text-generation-webui/docs/llama.cpp.md](https://github.com/oobabooga/text-generation-webui/blob/main/docs/llama.cpp.md).
## How to run from Python code
You can use GGUF models from Python using the [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) or [ctransformers](https://github.com/marella/ctransformers) libraries.
### How to load this model in Python code, using ctransformers
#### First install the package
Run one of the following commands, according to your system:
```shell
# Base ctransformers with no GPU acceleration
pip install ctransformers
# Or with CUDA GPU acceleration
pip install ctransformers[cuda]
# Or with AMD ROCm GPU acceleration (Linux only)
CT_HIPBLAS=1 pip install ctransformers --no-binary ctransformers
# Or with Metal GPU acceleration for macOS systems only
CT_METAL=1 pip install ctransformers --no-binary ctransformers
```
#### Simple ctransformers example code
```python
from ctransformers import AutoModelForCausalLM
# Set gpu_layers to the number of layers to offload to GPU. Set to 0 if no GPU acceleration is available on your system.
llm = AutoModelForCausalLM.from_pretrained("TheBloke/leo-hessianai-13B-GGUF", model_file="leo-hessianai-13b.Q4_K_M.gguf", model_type="llama", gpu_layers=50)
print(llm("AI is going to"))
```
## How to use with LangChain
Here are guides on using llama-cpp-python and ctransformers with LangChain:
* [LangChain + llama-cpp-python](https://python.langchain.com/docs/integrations/llms/llamacpp)
* [LangChain + ctransformers](https://python.langchain.com/docs/integrations/providers/ctransformers)
<!-- README_GGUF.md-how-to-run end -->
<!-- footer start -->
<!-- 200823 -->
## Discord
For further support, and discussions on these models and AI in general, join us at:
[TheBloke AI's Discord server](https://discord.gg/theblokeai)
## Thanks, and how to contribute
Thanks to the [chirper.ai](https://chirper.ai) team!
Thanks to Clay from [gpus.llm-utils.org](https://gpus.llm-utils.org)!
I've had a lot of people ask if they can contribute. I enjoy providing models and helping people, and would love to be able to spend even more time doing it, as well as expanding into new projects like fine tuning/training.
If you're able and willing to contribute it will be most gratefully received and will help me to keep providing more models, and to start work on new AI projects.
Donaters will get priority support on any and all AI/LLM/model questions and requests, access to a private Discord room, plus other benefits.
* Patreon: https://patreon.com/TheBlokeAI
* Ko-Fi: https://ko-fi.com/TheBlokeAI
**Special thanks to**: Aemon Algiz.
**Patreon special mentions**: Pierre Kircher, Stanislav Ovsiannikov, Michael Levine, Eugene Pentland, Andrey, 준교 김, Randy H, Fred von Graf, Artur Olbinski, Caitlyn Gatomon, terasurfer, Jeff Scroggin, James Bentley, Vadim, Gabriel Puliatti, Harry Royden McLaughlin, Sean Connelly, Dan Guido, Edmond Seymore, Alicia Loh, subjectnull, AzureBlack, Manuel Alberto Morcote, Thomas Belote, Lone Striker, Chris Smitley, Vitor Caleffi, Johann-Peter Hartmann, Clay Pascal, biorpg, Brandon Frisco, sidney chen, transmissions 11, Pedro Madruga, jinyuan sun, Ajan Kanaga, Emad Mostaque, Trenton Dambrowitz, Jonathan Leane, Iucharbius, usrbinkat, vamX, George Stoitzev, Luke Pendergrass, theTransient, Olakabola, Swaroop Kallakuri, Cap'n Zoog, Brandon Phillips, Michael Dempsey, Nikolai Manek, danny, Matthew Berman, Gabriel Tamborski, alfie_i, Raymond Fosdick, Tom X Nguyen, Raven Klaugh, LangChain4j, Magnesian, Illia Dulskyi, David Ziegler, Mano Prime, Luis Javier Navarrete Lozano, Erik Bjäreholt, 阿明, Nathan Dryer, Alex, Rainer Wilmers, zynix, TL, Joseph William Delisle, John Villwock, Nathan LeClaire, Willem Michiel, Joguhyik, GodLy, OG, Alps Aficionado, Jeffrey Morgan, ReadyPlayerEmma, Tiffany J. Kim, Sebastain Graf, Spencer Kim, Michael Davis, webtim, Talal Aujan, knownsqashed, John Detwiler, Imad Khwaja, Deo Leter, Jerry Meng, Elijah Stavena, Rooh Singh, Pieter, SuperWojo, Alexandros Triantafyllidis, Stephen Murray, Ai Maven, ya boyyy, Enrico Ros, Ken Nordquist, Deep Realms, Nicholas, Spiking Neurons AB, Elle, Will Dee, Jack West, RoA, Luke @flexchar, Viktor Bowallius, Derek Yates, Subspace Studios, jjj, Toran Billups, Asp the Wyvern, Fen Risland, Ilya, NimbleBox.ai, Chadd, Nitin Borwankar, Emre, Mandus, Leonard Tan, Kalila, K, Trailburnt, S_X, Cory Kujawski
Thank you to all my generous patrons and donaters!
And thank you again to a16z for their generous grant.
<!-- footer end -->
<!-- original-model-card start -->
# Original model card: LAION LeoLM's Leo Hessianai 13B
# LAION LeoLM: **L**inguistically **E**nhanced **O**pen **L**anguage **M**odel
Meet LeoLM, the first open and commercially available German Foundation Language Model built on Llama-2.
Our models extend Llama-2's capabilities into German through continued pretraining on a large corpus of German-language and mostly locality-specific text.
Thanks to a compute grant at HessianAI's new supercomputer **42**, we release two foundation models trained with 8k context length,
[`LeoLM/leo-hessianai-7b`](https://huggingface.co/LeoLM/leo-hessianai-7b) and [`LeoLM/leo-hessianai-13b`](https://huggingface.co/LeoLM/leo-hessianai-13b) under the [Llama-2 community license](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt) (70b also coming soon! 👀).
With this release, we hope to bring a new wave of opportunities to German open-source and commercial LLM research and accelerate adoption.
Read our [blog post]() or our paper (preprint coming soon) for more details!
*A project by Björn Plüster and Christoph Schuhmann in collaboration with LAION and HessianAI.*
## Model Details
- **Finetuned from:** [meta-llama/Llama-2-13b-hf](https://huggingface.co/meta-llama/Llama-2-13b-hf)
- **Model type:** Causal decoder-only transformer language model
- **Language:** English and German
- **License:** [LLAMA 2 COMMUNITY LICENSE AGREEMENT](https://huggingface.co/meta-llama/Llama-2-70b/raw/main/LICENSE.txt)
- **Contact:** [LAION Discord](https://discord.com/invite/eq3cAMZtCC) or [Björn Plüster](mailto:bjoern.pl@outlook.de)
## Use in 🤗Transformers
First install direct dependencies:
```
pip install transformers torch sentencepiece
```
If you want faster inference using flash-attention2, you need to install these dependencies:
```bash
pip install packaging ninja
pip install flash-attn==v2.1.1 --no-build-isolation
pip install git+https://github.com/HazyResearch/flash-attention.git@v2.1.1#subdirectory=csrc/rotary
```
Then load the model in transformers:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained(
    "LeoLM/leo-hessianai-13b",  # from_pretrained takes the model id as the first positional argument
    device_map="auto",
    torch_dtype=torch.float16,
    trust_remote_code=True  # True for flash-attn2, else False
)
```
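To complete the snippet, here is a short generation sketch (the German prompt is an arbitrary example, not from the original card):

```python
tokenizer = AutoTokenizer.from_pretrained("LeoLM/leo-hessianai-13b")
inputs = tokenizer("Die Hauptstadt von Hessen ist", return_tensors="pt").to(model.device)

# Greedy decoding, 30 new tokens
outputs = model.generate(**inputs, max_new_tokens=30)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```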
## Training parameters

## Benchmarks

<!-- original-model-card end -->
|
ArribaCampeon/Phi-3.5-mini-instruct-Q4_0-GGUF | ArribaCampeon | "2024-12-18T14:45:14Z" | 9 | 0 | transformers | [
"transformers",
"gguf",
"nlp",
"code",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"multilingual",
"base_model:microsoft/Phi-3.5-mini-instruct",
"base_model:quantized:microsoft/Phi-3.5-mini-instruct",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2024-12-18T14:45:03Z" | ---
license: mit
license_link: https://huggingface.co/microsoft/Phi-3.5-mini-instruct/resolve/main/LICENSE
language:
- multilingual
pipeline_tag: text-generation
tags:
- nlp
- code
- llama-cpp
- gguf-my-repo
widget:
- messages:
- role: user
content: Can you provide ways to eat combinations of bananas and dragonfruits?
library_name: transformers
base_model: microsoft/Phi-3.5-mini-instruct
---
# ArribaCampeon/Phi-3.5-mini-instruct-Q4_0-GGUF
This model was converted to GGUF format from [`microsoft/Phi-3.5-mini-instruct`](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/microsoft/Phi-3.5-mini-instruct) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo ArribaCampeon/Phi-3.5-mini-instruct-Q4_0-GGUF --hf-file phi-3.5-mini-instruct-q4_0.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo ArribaCampeon/Phi-3.5-mini-instruct-Q4_0-GGUF --hf-file phi-3.5-mini-instruct-q4_0.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo ArribaCampeon/Phi-3.5-mini-instruct-Q4_0-GGUF --hf-file phi-3.5-mini-instruct-q4_0.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo ArribaCampeon/Phi-3.5-mini-instruct-Q4_0-GGUF --hf-file phi-3.5-mini-instruct-q4_0.gguf -c 2048
```
|
SzegedAI/Meta-Llama-3-8B.GPTQ.Q8.WebCorpusHU_D256_S3072 | SzegedAI | "2024-05-28T03:18:00Z" | 6 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"gptq",
"region:us"
] | text-generation | "2024-05-28T03:03:22Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
tummy-tear-dark-viral-video/VIRAL.tummy-tear-dark-viral-video.Viral.Video.Full.Original.Video.Social.Media.X | tummy-tear-dark-viral-video | "2025-02-26T20:47:02Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-02-26T20:46:55Z" |
<a href="https://view-blog-777.blogspot.com/2025/02/sfdtgsdhdujftkhk.html"><img src="http://4.bp.blogspot.com/-VFcup4RzDQY/Upiobuokb5I/AAAAAAAAAV0/64yKpZilDCg/s1600/oie_nxv3mlmduAj1.gif" alt="fsd" /></a>
<a href="https://view-blog-777.blogspot.com/2025/02/sfdtgsdhdujftkhk.html" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐖𝐚𝐭𝐜𝐡 𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨)</a>
<a href="https://view-blog-777.blogspot.com/2025/02/sfdtgsdhdujftkhk.html" rel="nofollow">🔴 ➤►𝐂𝐥𝐢𝐤 𝐇𝐞𝐫𝐞 𝐭𝐨👉👉 (𝐅𝐮𝐥𝐥 𝐯𝐢𝐝𝐞𝐨 𝐋𝐢𝐧𝐤)</a>
|
pjox/camembert-classical-fr-ner | pjox | "2023-01-18T14:15:25Z" | 0 | 0 | flair | [
"flair",
"Early Modern French",
"Historical",
"NER",
"token-classification",
"fr",
"dataset:freemner",
"license:apache-2.0",
"region:us"
] | token-classification | "2023-01-18T14:11:44Z" | ---
language: fr
tags:
- Early Modern French
- Historical
- NER
- flair
license: apache-2.0
datasets:
- freemner
library_name: flair
pipeline_tag: token-classification
---
<a href="https://portizs.eu/publication/2022/lrec/dalembert/">
<img width="300px" src="https://portizs.eu/publication/2020/acl/camembert/featured_huac8a9374dbd7d6a2cb77224540858ab4_463389_720x2500_fit_q100_h2_lanczos_3.webp">
</a>
# CamemBERT Early Modern French NER model
This model is a [CamemBERT model](https://huggingface.co/camembert-base) fine-tuned on the [FreEMNER corpus](https://doi.org/10.5281/zenodo.6481135) for Early Modern French. It was
introduced in [this paper](https://aclanthology.org/2022.coling-1.327/).
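As a minimal usage sketch (not part of the original card; it assumes the repo contains a standard Flair `SequenceTagger` checkpoint, and the sentence is an arbitrary example):

```python
from flair.data import Sentence
from flair.models import SequenceTagger

# Load the fine-tuned NER tagger from the Hub
tagger = SequenceTagger.load("pjox/camembert-classical-fr-ner")

# Tag one French sentence and print the detected entities
sentence = Sentence("Molière est né à Paris.")
tagger.predict(sentence)
for entity in sentence.get_spans("ner"):
    print(entity)
```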
### BibTeX entry and citation info
```bibtex
@inproceedings{ortiz-suarez-gabay-2022-data,
title = "A Data-driven Approach to Named Entity Recognition for Early {M}odern {F}rench",
author = "Ortiz Suarez, Pedro and
Gabay, Simon",
booktitle = "Proceedings of the 29th International Conference on Computational Linguistics",
month = oct,
year = "2022",
address = "Gyeongju, Republic of Korea",
publisher = "International Committee on Computational Linguistics",
url = "https://aclanthology.org/2022.coling-1.327",
pages = "3722--3730",
abstract = "Named entity recognition has become an increasingly useful tool for digital humanities research, specially when it comes to historical texts. However, historical texts pose a wide range of challenges to both named entity recognition and natural language processing in general that are still difficult to address even with modern neural methods. In this article we focus in named entity recognition for historical French, and in particular for Early Modern French (16th-18th c.), i.e. Ancien R{\'e}gime French. However, instead of developing a specialised architecture to tackle the particularities of this state of language, we opt for a data-driven approach by developing a new corpus with fine-grained entity annotation, covering three centuries of literature corresponding to the early modern period; we try to annotate as much data as possible producing a corpus that is many times bigger than the most popular NER evaluation corpora for both Contemporary English and French. We then fine-tune existing state-of-the-art architectures for Early Modern and Contemporary French, obtaining results that are on par with those of the current state-of-the-art NER systems for Contemporary English. Both the corpus and the fine-tuned models are released.",
}
``` |
LeoGenerativeCRM/model | LeoGenerativeCRM | "2024-11-20T22:47:57Z" | 6 | 0 | transformers | [
"transformers",
"gguf",
"llama",
"text-generation-inference",
"unsloth",
"en",
"base_model:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"base_model:quantized:unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-20T22:46:31Z" | ---
base_model: unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** LeoGenerativeCRM
- **License:** apache-2.0
- **Finetuned from model :** unsloth/Meta-Llama-3.1-8B-Instruct-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
distributed/gpt2-500m | distributed | "2024-07-16T08:30:14Z" | 164 | 0 | transformers | [
"transformers",
"safetensors",
"gpt2",
"feature-extraction",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2024-07-16T08:29:21Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
mradermacher/K2S3-Mistral-7b-v1.47-GGUF | mradermacher | "2025-01-03T23:33:40Z" | 21 | 0 | transformers | [
"transformers",
"gguf",
"en",
"base_model:Changgil/K2S3-Mistral-7b-v1.47",
"base_model:quantized:Changgil/K2S3-Mistral-7b-v1.47",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | null | "2025-01-03T23:02:16Z" | ---
base_model: Changgil/K2S3-Mistral-7b-v1.47
language:
- en
library_name: transformers
license: cc-by-nc-4.0
quantized_by: mradermacher
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/Changgil/K2S3-Mistral-7b-v1.47
<!-- provided-files -->
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.47-GGUF/resolve/main/K2S3-Mistral-7b-v1.47.Q2_K.gguf) | Q2_K | 2.9 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.47-GGUF/resolve/main/K2S3-Mistral-7b-v1.47.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.47-GGUF/resolve/main/K2S3-Mistral-7b-v1.47.Q3_K_M.gguf) | Q3_K_M | 3.7 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.47-GGUF/resolve/main/K2S3-Mistral-7b-v1.47.Q3_K_L.gguf) | Q3_K_L | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.47-GGUF/resolve/main/K2S3-Mistral-7b-v1.47.IQ4_XS.gguf) | IQ4_XS | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.47-GGUF/resolve/main/K2S3-Mistral-7b-v1.47.Q4_K_S.gguf) | Q4_K_S | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.47-GGUF/resolve/main/K2S3-Mistral-7b-v1.47.Q4_K_M.gguf) | Q4_K_M | 4.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.47-GGUF/resolve/main/K2S3-Mistral-7b-v1.47.Q5_K_S.gguf) | Q5_K_S | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.47-GGUF/resolve/main/K2S3-Mistral-7b-v1.47.Q5_K_M.gguf) | Q5_K_M | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.47-GGUF/resolve/main/K2S3-Mistral-7b-v1.47.Q6_K.gguf) | Q6_K | 6.2 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.47-GGUF/resolve/main/K2S3-Mistral-7b-v1.47.Q8_0.gguf) | Q8_0 | 7.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/K2S3-Mistral-7b-v1.47-GGUF/resolve/main/K2S3-Mistral-7b-v1.47.f16.gguf) | f16 | 14.8 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
ThomasFG/100-0 | ThomasFG | "2024-02-24T12:02:08Z" | 76 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"base_model:openai/whisper-small.en",
"base_model:finetune:openai/whisper-small.en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | "2024-02-24T10:22:37Z" | ---
license: apache-2.0
base_model: openai/whisper-small.en
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: 2024-02-24_11-22-34
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 2024-02-24_11-22-34
This model is a fine-tuned version of [openai/whisper-small.en](https://huggingface.co/openai/whisper-small.en) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4083
- Wer: 15.0042
## Model description
More information needed
## Intended uses & limitations
More information needed
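As a minimal usage sketch (not part of the auto-generated card; `audio.wav` is a placeholder file path):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint with the ASR pipeline and transcribe a file
asr = pipeline("automatic-speech-recognition", model="ThomasFG/100-0")
print(asr("audio.wav")["text"])
```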
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2041 | 1.0 | 382 | 0.4083 | 15.0042 |
### Framework versions
- Transformers 4.37.2
- Pytorch 1.13.1+cu116
- Datasets 2.17.0
- Tokenizers 0.15.2
|
Safurai/Safurai-Csharp-34B-GGUF | Safurai | "2023-11-07T10:01:25Z" | 10 | 3 | transformers | [
"transformers",
"llama",
"text-generation",
"arxiv:2311.03243",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-10-23T10:46:16Z" | ---
license: apache-2.0
pipeline_tag: text-generation
---
# 🥷 Safurai-Csharp-34B
📝 [Article](https://www.safurai.com/blog/introducing-safurai-csharp)
📄 [Paper](https://arxiv.org/abs/2311.03243)
<center><img src="https://i.imgur.com/REPqbYM.png" width="300"></center>
This is a [`codellama/CodeLlama-34b-hf`](https://huggingface.co/codellama/CodeLlama-34b-hf) model fine-tuned using QLoRA (4-bit precision) on 13B tokens of evolved csharp Q&A data.
We obtained <b>state-of-the-art performance</b> on the MultiPL-E code LLM benchmark for csharp, reaching 56% at pass@1 with n=5.
## 💻 Quantization
These are GGUF quantized versions of Safurai-Csharp-34B, it has been made by using the amazing [`llama.cpp`](https://github.com/ggerganov/llama.cpp) library.
## 🔧 Training
It was trained on 2 x NVIDIA A100 PCIe 80GB in 7h 40m with the following configuration file:
```yaml
base_model: codellama/CodeLlama-34b-hf
base_model_config: codellama/CodeLlama-34b-hf
model_type: LlamaForCausalLM
tokenizer_type: CodeLlamaTokenizer
is_llama_derived_model: true
hub_model_id: "Safurai/Evol-csharp-v1"
load_in_8bit: false
load_in_4bit: true
strict: false
datasets:
- path: Safurai/EvolInstruct-csharp-16k-13B-Alpaca
type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.01
output_dir: ./qlora-out
sequence_len: 4096
sample_packing: true
pad_to_sequence_len: true
adapter: lora
lora_model_dir:
lora_r: 32
lora_alpha: 16
lora_dropout: 0.05
lora_target_linear: true
lora_fan_in_fan_out:
wandb_project: codellama-csharp
wandb_entity:
wandb_watch:
wandb_run_id:
wandb_log_model:
gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 3
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0003
train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false
gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true
warmup_steps: 40
eval_steps: 40
save_steps:
debug:
deepspeed:
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
bos_token: "<s>"
eos_token: "</s>"
unk_token: "<unk>"
```
## 📉 Training loss curve:
<img src="https://i.imgur.com/rp1htuf.png" width="500">
## 📊 Dataset composition:
<img src="https://i.imgur.com/kTNXgGX.png" width="500">
## 💻 Usage for GGUF
``` python
# Disclaimer: run this next to a local clone of the llama.cpp repository
# (e.g. in a notebook, since the last line uses notebook shell syntax).
# MODEL_NAME and GGML_VERSION are placeholders for your model folder name
# and quantisation version.
import os

model_list = [file for file in os.listdir(MODEL_NAME) if GGML_VERSION in file]

prompt = input("Enter your prompt: ")
chosen_method = input("Please specify the quantization method to run the model (options: " + ", ".join(model_list) + "): ")

# Verify the chosen method is in the list
if chosen_method not in model_list:
    print("Invalid method chosen!")
else:
    qtype = f"{MODEL_NAME}/{MODEL_NAME.lower()}.{GGML_VERSION}.{chosen_method}.bin"
    !./llama.cpp/main -m {qtype} -n 128 --color -ngl 35 -p "{prompt}"
```
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl) |
Gayathri142214002/Finetune_Pegasus_1 | Gayathri142214002 | "2023-09-08T04:39:25Z" | 5 | 0 | transformers | [
"transformers",
"pytorch",
"pegasus",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2023-07-18T05:19:37Z" | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Finetune_Pegasus_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Finetune_Pegasus_1
This model is a fine-tuned version of [tuner007/pegasus_paraphrase](https://huggingface.co/tuner007/pegasus_paraphrase) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0942
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.7293 | 0.21 | 10 | 1.2156 |
| 1.3661 | 0.41 | 20 | 1.1203 |
| 1.3897 | 0.62 | 30 | 1.0665 |
| 1.3356 | 0.82 | 40 | 1.0304 |
| 1.171 | 1.03 | 50 | 1.0098 |
| 0.8665 | 1.23 | 60 | 1.0062 |
| 0.7864 | 1.44 | 70 | 1.0266 |
| 0.8785 | 1.64 | 80 | 1.0190 |
| 1.0596 | 1.85 | 90 | 1.0218 |
| 1.0386 | 2.05 | 100 | 1.0213 |
| 0.7452 | 2.26 | 110 | 1.0639 |
| 0.6807 | 2.46 | 120 | 1.0619 |
| 0.5764 | 2.67 | 130 | 1.0530 |
| 0.87 | 2.87 | 140 | 1.0571 |
| 0.7724 | 3.08 | 150 | 1.0563 |
| 0.5847 | 3.28 | 160 | 1.0692 |
| 0.6053 | 3.49 | 170 | 1.0652 |
| 0.6416 | 3.69 | 180 | 1.0531 |
| 0.6392 | 3.9 | 190 | 1.0416 |
| 0.6138 | 4.1 | 200 | 1.0489 |
| 0.6093 | 4.31 | 210 | 1.0668 |
| 0.5484 | 4.51 | 220 | 1.0843 |
| 0.6082 | 4.72 | 230 | 1.0771 |
| 0.56 | 4.92 | 240 | 1.0745 |
| 0.5796 | 5.13 | 250 | 1.0770 |
| 0.6597 | 5.33 | 260 | 1.0722 |
| 0.4834 | 5.54 | 270 | 1.0726 |
| 0.4232 | 5.74 | 280 | 1.0682 |
| 0.5432 | 5.95 | 290 | 1.0769 |
| 0.5944 | 6.15 | 300 | 1.0851 |
| 0.4663 | 6.36 | 310 | 1.0884 |
| 0.4568 | 6.56 | 320 | 1.0915 |
| 0.4565 | 6.77 | 330 | 1.0942 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
|
albertus-sussex/veriscrape-simcse-auto-reference_6_to_verify_4-fold-2 | albertus-sussex | "2025-03-26T12:23:32Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | feature-extraction | "2025-03-26T12:22:54Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
JoannaKOKO/Qwen2VL-2b_tarot | JoannaKOKO | "2025-03-23T13:05:28Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2-VL-2B-Instruct",
"base_model:finetune:Qwen/Qwen2-VL-2B-Instruct",
"endpoints_compatible",
"region:us"
] | null | "2025-03-23T12:47:10Z" | ---
base_model: Qwen/Qwen2-VL-2B-Instruct
library_name: transformers
model_name: Qwen2VL-2b_tarot
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Qwen2VL-2b_tarot
This model is a fine-tuned version of [Qwen/Qwen2-VL-2B-Instruct](https://huggingface.co/Qwen/Qwen2-VL-2B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="JoannaKOKO/Qwen2VL-2b_tarot", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.16.0
- Transformers: 4.49.0
- Pytorch: 2.6.0+cu124
- Datasets: 3.4.1
- Tokenizers: 0.21.1
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
HelpMumHQ/mamabot-llama-1 | HelpMumHQ | "2024-11-29T13:29:11Z" | 75 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-11-29T13:27:04Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
skarsa/annomatic_topic_subsamples_model_alpha_1_idx_2 | skarsa | "2025-02-11T13:40:07Z" | 29 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"roberta",
"text-classification",
"generated_from_trainer",
"base_model:FacebookAI/roberta-base",
"base_model:finetune:FacebookAI/roberta-base",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-01-15T16:49:51Z" | ---
library_name: transformers
license: mit
base_model: roberta-base
tags:
- generated_from_trainer
model-index:
- name: annomatic_topic_subsamples_model_alpha_1_idx_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# annomatic_topic_subsamples_model_alpha_1_idx_2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.2.0
- Tokenizers 0.21.0
|
Sergeantzero/TF2_Dub | Sergeantzero | "2025-02-17T05:58:49Z" | 0 | 0 | null | [
"license:apache-2.0",
"region:us"
] | null | "2025-01-31T17:57:06Z" | ---
license: apache-2.0
---
|
danfeg/LaBSE_Finetuned-EN-1000 | danfeg | "2024-03-23T18:38:22Z" | 4 | 0 | sentence-transformers | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | sentence-similarity | "2024-03-23T18:36:58Z" | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# danfeg/LaBSE_Finetuned-EN-1000
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('danfeg/LaBSE_Finetuned-EN-1000')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=danfeg/LaBSE_Finetuned-EN-1000)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 32 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
(2): Dense({'in_features': 768, 'out_features': 768, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
(3): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information --> |
duyntnet/Starling-LM-7B-alpha-imatrix-GGUF | duyntnet | "2024-04-27T05:15:04Z" | 4 | 0 | transformers | [
"transformers",
"gguf",
"imatrix",
"Starling-LM-7B-alpha",
"text-generation",
"en",
"license:other",
"region:us",
"conversational"
] | text-generation | "2024-04-27T03:18:46Z" | ---
license: other
language:
- en
pipeline_tag: text-generation
inference: false
tags:
- transformers
- gguf
- imatrix
- Starling-LM-7B-alpha
---
Quantizations of https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha
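As a hedged sketch (not part of the original card), one way to run a downloaded quant with llama-cpp-python; the filename below is an assumption, so use whichever quant you actually fetched:
```python
from llama_cpp import Llama

# Filename is an assumption -- pick the quant you downloaded from this repo
llm = Llama(model_path="Starling-LM-7B-alpha.Q4_K_M.gguf", n_ctx=4096)

# Prompt follows the OpenChat-style template documented below
prompt = "GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:"
out = llm(prompt, max_tokens=128, temperature=0)
print(out["choices"][0]["text"])
```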
# From original readme
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
**Important: Please use the exact chat template provided below for the model; otherwise, performance will degrade. The model output can be verbose in rare cases; consider setting temperature = 0 to reduce this.**
Our model follows the exact chat template and usage as [Openchat 3.5](https://huggingface.co/openchat/openchat_3.5). Please refer to their model card for more details.
In addition, our model is hosted on LMSYS [Chatbot Arena](https://chat.lmsys.org), where it can be tested for free.
The conversation template is the same as Openchat 3.5:
```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("openchat/openchat_3.5")
# Single-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
# Multi-turn
tokens = tokenizer("GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hi<|end_of_turn|>GPT4 Correct User: How are you today?<|end_of_turn|>GPT4 Correct Assistant:").input_ids
assert tokens == [1, 420, 6316, 28781, 3198, 3123, 1247, 28747, 22557, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747, 15359, 32000, 420, 6316, 28781, 3198, 3123, 1247, 28747, 1602, 460, 368, 3154, 28804, 32000, 420, 6316, 28781, 3198, 3123, 21631, 28747]
# Coding Mode
tokens = tokenizer("Code User: Implement quicksort using C++<|end_of_turn|>Code Assistant:").input_ids
assert tokens == [1, 7596, 1247, 28747, 26256, 2936, 7653, 1413, 334, 1680, 32000, 7596, 21631, 28747]
```
## Code Examples
```python
import transformers
tokenizer = transformers.AutoTokenizer.from_pretrained("berkeley-nest/Starling-LM-7B-alpha")
model = transformers.AutoModelForCausalLM.from_pretrained("berkeley-nest/Starling-LM-7B-alpha")
def generate_response(prompt):
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(
input_ids,
max_length=256,
pad_token_id=tokenizer.pad_token_id,
eos_token_id=tokenizer.eos_token_id,
)
response_ids = outputs[0]
response_text = tokenizer.decode(response_ids, skip_special_tokens=True)
return response_text
# Single-turn conversation
prompt = "Hello, how are you?"
single_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(single_turn_prompt)
print("Response:", response_text)
## Multi-turn conversation
prompt = "Hello"
follow_up_question = "How are you today?"
response = ""
multi_turn_prompt = f"GPT4 Correct User: {prompt}<|end_of_turn|>GPT4 Correct Assistant: {response}<|end_of_turn|>GPT4 Correct User: {follow_up_question}<|end_of_turn|>GPT4 Correct Assistant:"
response_text = generate_response(multi_turn_prompt)
print("Multi-turn conversation response:", response_text)
### Coding conversation
prompt = "Implement quicksort using C++"
coding_prompt = f"Code User: {prompt}<|end_of_turn|>Code Assistant:"
response = generate_response(coding_prompt)
print("Coding conversation response:", response)
``` |
LHRuig/scottyjxsx | LHRuig | "2025-03-24T04:05:32Z" | 0 | 0 | diffusers | [
"diffusers",
"safetensors",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] | text-to-image | "2025-03-24T04:05:29Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: suit
output:
url: images/suit.jpg
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: scottyjxsx
---
# scottyjxsx
<Gallery />
## Model description
scottyjxsx LoRA for FLUX.1-dev.
## Trigger words
You should use `scottyjxsx` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/LHRuig/scottyjxsx/tree/main) them in the Files & versions tab.
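A minimal, hedged sketch of attaching this LoRA to its FLUX.1-dev base with diffusers (dtype, device, and step count are assumptions):
```python
import torch
from diffusers import FluxPipeline

# Load the base model, then attach this LoRA from the Hub
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)
pipe.load_lora_weights("LHRuig/scottyjxsx")
pipe.to("cuda")

# Include the trigger word in the prompt
image = pipe("scottyjxsx wearing a suit", num_inference_steps=28).images[0]
image.save("suit.png")
```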
|
mradermacher/deepseek-uncensored-lore-i1-GGUF | mradermacher | "2025-02-08T09:49:33Z" | 607 | 0 | transformers | [
"transformers",
"gguf",
"text-generation",
"storytelling",
"DeepSeek",
"en",
"base_model:luvGPT/deepseek-uncensored-lore",
"base_model:quantized:luvGPT/deepseek-uncensored-lore",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | text-generation | "2025-02-08T08:42:56Z" | ---
base_model: luvGPT/deepseek-uncensored-lore
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- text-generation
- storytelling
- transformers
- DeepSeek
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/luvGPT/deepseek-uncensored-lore
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/deepseek-uncensored-lore-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
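As a hedged illustration (not part of the original card), a single-file quant from the table below can be fetched with `huggingface_hub`:
```python
from huggingface_hub import hf_hub_download

# Pick any filename from the "Provided Quants" table below
path = hf_hub_download(
    repo_id="mradermacher/deepseek-uncensored-lore-i1-GGUF",
    filename="deepseek-uncensored-lore.i1-Q4_K_M.gguf",
)
print(path)
```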
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/deepseek-uncensored-lore-i1-GGUF/resolve/main/deepseek-uncensored-lore.i1-IQ1_S.gguf) | i1-IQ1_S | 1.8 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/deepseek-uncensored-lore-i1-GGUF/resolve/main/deepseek-uncensored-lore.i1-IQ1_M.gguf) | i1-IQ1_M | 1.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/deepseek-uncensored-lore-i1-GGUF/resolve/main/deepseek-uncensored-lore.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 2.1 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-uncensored-lore-i1-GGUF/resolve/main/deepseek-uncensored-lore.i1-IQ2_XS.gguf) | i1-IQ2_XS | 2.3 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-uncensored-lore-i1-GGUF/resolve/main/deepseek-uncensored-lore.i1-IQ2_S.gguf) | i1-IQ2_S | 2.5 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-uncensored-lore-i1-GGUF/resolve/main/deepseek-uncensored-lore.i1-Q2_K_S.gguf) | i1-Q2_K_S | 2.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/deepseek-uncensored-lore-i1-GGUF/resolve/main/deepseek-uncensored-lore.i1-IQ2_M.gguf) | i1-IQ2_M | 2.6 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-uncensored-lore-i1-GGUF/resolve/main/deepseek-uncensored-lore.i1-Q2_K.gguf) | i1-Q2_K | 2.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/deepseek-uncensored-lore-i1-GGUF/resolve/main/deepseek-uncensored-lore.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 2.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/deepseek-uncensored-lore-i1-GGUF/resolve/main/deepseek-uncensored-lore.i1-IQ3_XS.gguf) | i1-IQ3_XS | 3.1 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-uncensored-lore-i1-GGUF/resolve/main/deepseek-uncensored-lore.i1-IQ3_S.gguf) | i1-IQ3_S | 3.2 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/deepseek-uncensored-lore-i1-GGUF/resolve/main/deepseek-uncensored-lore.i1-Q3_K_S.gguf) | i1-Q3_K_S | 3.2 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/deepseek-uncensored-lore-i1-GGUF/resolve/main/deepseek-uncensored-lore.i1-IQ3_M.gguf) | i1-IQ3_M | 3.4 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-uncensored-lore-i1-GGUF/resolve/main/deepseek-uncensored-lore.i1-Q3_K_M.gguf) | i1-Q3_K_M | 3.6 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/deepseek-uncensored-lore-i1-GGUF/resolve/main/deepseek-uncensored-lore.i1-Q3_K_L.gguf) | i1-Q3_K_L | 3.8 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/deepseek-uncensored-lore-i1-GGUF/resolve/main/deepseek-uncensored-lore.i1-IQ4_XS.gguf) | i1-IQ4_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-uncensored-lore-i1-GGUF/resolve/main/deepseek-uncensored-lore.i1-IQ4_NL.gguf) | i1-IQ4_NL | 4.1 | prefer IQ4_XS |
| [GGUF](https://huggingface.co/mradermacher/deepseek-uncensored-lore-i1-GGUF/resolve/main/deepseek-uncensored-lore.i1-Q4_0.gguf) | i1-Q4_0 | 4.1 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/deepseek-uncensored-lore-i1-GGUF/resolve/main/deepseek-uncensored-lore.i1-Q4_K_S.gguf) | i1-Q4_K_S | 4.1 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/deepseek-uncensored-lore-i1-GGUF/resolve/main/deepseek-uncensored-lore.i1-Q4_K_M.gguf) | i1-Q4_K_M | 4.3 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/deepseek-uncensored-lore-i1-GGUF/resolve/main/deepseek-uncensored-lore.i1-Q4_1.gguf) | i1-Q4_1 | 4.5 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-uncensored-lore-i1-GGUF/resolve/main/deepseek-uncensored-lore.i1-Q5_K_S.gguf) | i1-Q5_K_S | 4.9 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-uncensored-lore-i1-GGUF/resolve/main/deepseek-uncensored-lore.i1-Q5_K_M.gguf) | i1-Q5_K_M | 5.0 | |
| [GGUF](https://huggingface.co/mradermacher/deepseek-uncensored-lore-i1-GGUF/resolve/main/deepseek-uncensored-lore.i1-Q6_K.gguf) | i1-Q6_K | 5.8 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
Srinivasguna/finetuning-sentiment-model-3000-samples_1 | Srinivasguna | "2025-02-20T04:52:57Z" | 0 | 0 | transformers | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-02-20T04:41:22Z" | ---
library_name: transformers
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuning-sentiment-model-3000-samples_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples_1
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3631
- Accuracy: 0.8733
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.3.1
- Tokenizers 0.21.0
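As a hedged illustration (not from the original card), here is a minimal inference sketch; note that the label names are not documented here:
```python
from transformers import pipeline

# Minimal sketch; label meanings (e.g. LABEL_0 / LABEL_1) are not documented in this card
classifier = pipeline(
    "text-classification",
    model="Srinivasguna/finetuning-sentiment-model-3000-samples_1",
)
print(classifier("This movie was surprisingly good!"))
```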
|
mradermacher/Meissa-Qwen2.5-12.3B-Instruct-i1-GGUF | mradermacher | "2024-11-11T02:46:11Z" | 23 | 1 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:win10/Meissa-Qwen2.5-12.3B-Instruct",
"base_model:quantized:win10/Meissa-Qwen2.5-12.3B-Instruct",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2024-11-11T00:51:52Z" | ---
base_model: win10/Meissa-Qwen2.5-12.3B-Instruct
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/win10/Meissa-Qwen2.5-12.3B-Instruct
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-i1-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.i1-IQ1_S.gguf) | i1-IQ1_S | 3.0 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-i1-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.i1-IQ1_M.gguf) | i1-IQ1_M | 3.2 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-i1-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 3.6 | |
| [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-i1-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.i1-IQ2_XS.gguf) | i1-IQ2_XS | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-i1-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.i1-IQ2_S.gguf) | i1-IQ2_S | 4.1 | |
| [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-i1-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.i1-IQ2_M.gguf) | i1-IQ2_M | 4.4 | |
| [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-i1-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.i1-Q2_K.gguf) | i1-Q2_K | 4.8 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-i1-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 5.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-i1-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.i1-IQ3_XS.gguf) | i1-IQ3_XS | 5.3 | |
| [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-i1-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.i1-Q3_K_S.gguf) | i1-Q3_K_S | 5.6 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-i1-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.i1-IQ3_S.gguf) | i1-IQ3_S | 5.6 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-i1-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.i1-IQ3_M.gguf) | i1-IQ3_M | 5.7 | |
| [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-i1-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.i1-Q3_K_M.gguf) | i1-Q3_K_M | 6.1 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-i1-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.i1-Q3_K_L.gguf) | i1-Q3_K_L | 6.6 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-i1-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.i1-IQ4_XS.gguf) | i1-IQ4_XS | 6.8 | |
| [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-i1-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.i1-Q4_0_4_4.gguf) | i1-Q4_0_4_4 | 7.2 | fast on arm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-i1-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.i1-Q4_0_4_8.gguf) | i1-Q4_0_4_8 | 7.2 | fast on arm+i8mm, low quality |
| [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-i1-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.i1-Q4_0_8_8.gguf) | i1-Q4_0_8_8 | 7.2 | fast on arm+sve, low quality |
| [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-i1-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.i1-Q4_0.gguf) | i1-Q4_0 | 7.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-i1-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.i1-Q4_K_S.gguf) | i1-Q4_K_S | 7.2 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-i1-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.i1-Q4_K_M.gguf) | i1-Q4_K_M | 7.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-i1-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.i1-Q5_K_S.gguf) | i1-Q5_K_S | 8.6 | |
| [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-i1-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.i1-Q5_K_M.gguf) | i1-Q5_K_M | 8.8 | |
| [GGUF](https://huggingface.co/mradermacher/Meissa-Qwen2.5-12.3B-Instruct-i1-GGUF/resolve/main/Meissa-Qwen2.5-12.3B-Instruct.i1-Q6_K.gguf) | i1-Q6_K | 10.2 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
staycoolish/rl_course_vizdoom_health_gathering_supreme | staycoolish | "2023-04-27T21:56:00Z" | 0 | 0 | sample-factory | [
"sample-factory",
"tensorboard",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | "2023-04-27T21:55:45Z" | ---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: doom_health_gathering_supreme
type: doom_health_gathering_supreme
metrics:
- type: mean_reward
value: 9.15 +/- 4.32
name: mean_reward
verified: false
---
An **APPO** model trained on the **doom_health_gathering_supreme** environment.
This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/
## Downloading the model
After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r staycoolish/rl_course_vizdoom_health_gathering_supreme
```
## Using the model
To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```
You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details
## Training with this model
To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```
Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume from the number of steps at which it concluded.
|
badrmarani/cifar10lt_r34_conftrclass_tune_logits_1000_100_1_0.005 | badrmarani | "2025-04-16T02:17:44Z" | 0 | 0 | null | [
"safetensors",
"model_hub_mixin",
"pytorch_model_hub_mixin",
"region:us"
] | null | "2025-04-16T02:17:39Z" | ---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---
This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- Code: [More Information Needed]
- Paper: [More Information Needed]
- Docs: [More Information Needed] |
mradermacher/zephyr-7b-UC-0-GGUF | mradermacher | "2024-11-01T00:33:11Z" | 70 | 0 | transformers | [
"transformers",
"gguf",
"trl",
"dpo",
"generated_from_trainer",
"en",
"base_model:weijie210/zephyr-7b-UC-0",
"base_model:quantized:weijie210/zephyr-7b-UC-0",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2024-11-01T00:02:46Z" | ---
base_model: weijie210/zephyr-7b-UC-0
language:
- en
library_name: transformers
license: apache-2.0
quantized_by: mradermacher
tags:
- trl
- dpo
- generated_from_trainer
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
static quants of https://huggingface.co/weijie210/zephyr-7b-UC-0
<!-- provided-files -->
weighted/imatrix quants do not appear to be available (from me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
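As a hedged sketch (not part of the original card), recent llama-cpp-python builds can fetch and load a quant from the table below directly:
```python
from llama_cpp import Llama

# Downloads the chosen quant from the Hub, then loads it
llm = Llama.from_pretrained(
    repo_id="mradermacher/zephyr-7b-UC-0-GGUF",
    filename="zephyr-7b-UC-0.Q4_K_M.gguf",
)
print(llm("Hello", max_tokens=32)["choices"][0]["text"])
```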
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-UC-0-GGUF/resolve/main/zephyr-7b-UC-0.Q2_K.gguf) | Q2_K | 2.8 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-UC-0-GGUF/resolve/main/zephyr-7b-UC-0.Q3_K_S.gguf) | Q3_K_S | 3.3 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-UC-0-GGUF/resolve/main/zephyr-7b-UC-0.Q3_K_M.gguf) | Q3_K_M | 3.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-UC-0-GGUF/resolve/main/zephyr-7b-UC-0.Q3_K_L.gguf) | Q3_K_L | 3.9 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-UC-0-GGUF/resolve/main/zephyr-7b-UC-0.IQ4_XS.gguf) | IQ4_XS | 4.0 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-UC-0-GGUF/resolve/main/zephyr-7b-UC-0.Q4_K_S.gguf) | Q4_K_S | 4.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-UC-0-GGUF/resolve/main/zephyr-7b-UC-0.Q4_K_M.gguf) | Q4_K_M | 4.5 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-UC-0-GGUF/resolve/main/zephyr-7b-UC-0.Q5_K_S.gguf) | Q5_K_S | 5.1 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-UC-0-GGUF/resolve/main/zephyr-7b-UC-0.Q5_K_M.gguf) | Q5_K_M | 5.2 | |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-UC-0-GGUF/resolve/main/zephyr-7b-UC-0.Q6_K.gguf) | Q6_K | 6.0 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/zephyr-7b-UC-0-GGUF/resolve/main/zephyr-7b-UC-0.Q8_0.gguf) | Q8_0 | 7.8 | fast, best quality |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
secmlr/SWE-BENCH-433-set-claude-related-localization-with-reasoning_continue-file-level-7b-433 | secmlr | "2025-04-15T20:14:01Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"llama-factory",
"full",
"generated_from_trainer",
"conversational",
"base_model:secmlr/SWE-BENCH-433-set-claude-file-localization-with-reasoning_qwen_code_7B_test_swe",
"base_model:finetune:secmlr/SWE-BENCH-433-set-claude-file-localization-with-reasoning_qwen_code_7B_test_swe",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-15T18:27:38Z" | ---
library_name: transformers
license: apache-2.0
base_model: secmlr/SWE-BENCH-433-set-claude-file-localization-with-reasoning_qwen_code_7B_test_swe
tags:
- llama-factory
- full
- generated_from_trainer
model-index:
- name: SWE-BENCH-433-set-claude-related-localization-with-reasoning_continue-file-level-7b-433
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SWE-BENCH-433-set-claude-related-localization-with-reasoning_continue-file-level-7b-433
This model is a fine-tuned version of [secmlr/SWE-BENCH-433-set-claude-file-localization-with-reasoning_qwen_code_7B_test_swe](https://huggingface.co/secmlr/SWE-BENCH-433-set-claude-file-localization-with-reasoning_qwen_code_7B_test_swe) on the SWE-BENCH-433-set-claude-related-localization-with-reasoning dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 12
- total_train_batch_size: 24
- total_eval_batch_size: 16
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.48.3
- Pytorch 2.5.1+cu124
- Datasets 3.2.0
- Tokenizers 0.21.0
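As a hedged illustration (not from the original card), a minimal chat-style inference sketch; the prompt content is illustrative, since the card does not document an input format:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "secmlr/SWE-BENCH-433-set-claude-related-localization-with-reasoning_continue-file-level-7b-433"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Prompt content is illustrative; the card does not document an input format
messages = [{"role": "user", "content": "Given this bug report, which files are most likely involved?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```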
|
ajlsax/3043e444-7e98-4648-850c-d939eefd0179 | ajlsax | "2025-04-10T08:03:11Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-10T07:34:28Z" | |
DucQuan/qwen2.5-0.5b-finetune-custom-guff | DucQuan | "2025-03-21T04:07:58Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"qwen2",
"text-generation-inference",
"unsloth",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-20T11:48:21Z" | ---
base_model: unsloth/qwen2.5-0.5b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- gguf
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** DucQuan
- **License:** apache-2.0
- **Finetuned from model:** unsloth/qwen2.5-0.5b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|