modelId (string, 5–138 chars) | author (string, 2–42 chars) | last_modified (date, 2020-02-15 11:33:14 – 2025-05-08 00:40:07) | downloads (int64, 0–223M) | likes (int64, 0–11.7k) | library_name (string, 451 classes) | tags (sequence, 1–4.05k items) | pipeline_tag (string, 54 classes) | createdAt (date, 2022-03-02 23:29:04 – 2025-05-08 00:39:42) | card (string, 11–1.01M chars) |
---|---|---|---|---|---|---|---|---|---|
recursiveauto/pareto-lang-Interpretability-Rosetta-Stone | recursiveauto | "2025-04-07T00:43:49Z" | 0 | 0 | null | [
"interpretability",
"alignment",
"constitutional AI",
"refusal-diagnostic",
"transformer-failure-analysis",
"recursion",
"failure-as-signal",
"advanced",
"transformer",
"models",
"arxiv:2504.01234",
"region:us"
] | null | "2025-04-06T21:49:04Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:website@huggingface.co">an email</a></p>
</div>
</main>
</body>
</html> |
MinaMila/gemma2_9b_Adult_6ep_22 | MinaMila | "2025-04-07T00:43:41Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:unsloth/gemma-2-9b",
"base_model:finetune:unsloth/gemma-2-9b",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-07T00:40:03Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:website@huggingface.co">an email</a></p>
</div>
</main>
</body>
</html> |
ZachSun/qwen2.5-gfn-sft-3b-250k | ZachSun | "2025-04-07T00:42:56Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-07T00:42:56Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:website@huggingface.co">an email</a></p>
</div>
</main>
</body>
</html> |
abePclWaseda/asr_train_asr_conformer_lr2e-3_warmup15k_amp_nondet_raw_en_hf_openai-gpt2_sp | abePclWaseda | "2025-04-07T00:40:49Z" | 0 | 0 | espnet | [
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech_100",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | automatic-speech-recognition | "2025-04-06T05:04:10Z" | (Hugging Face 429 "We had to rate limit you" error page returned in place of the model card) |
lesso08/b65f4075-8b40-44c8-a6ea-17da40844ab2 | lesso08 | "2025-04-07T00:39:19Z" | 0 | 0 | null | [
"region:us"
] | null | "2025-04-07T00:16:39Z" | <!DOCTYPE html>
<html class="" lang="en">
<head>
<meta charset="utf-8" />
<meta
name="viewport"
content="width=device-width, initial-scale=1.0, user-scalable=no"
/>
<meta
name="description"
content="We're on a journey to advance and democratize artificial intelligence through open source and open science."
/>
<meta property="fb:app_id" content="1321688464574422" />
<meta name="twitter:card" content="summary_large_image" />
<meta name="twitter:site" content="@huggingface" />
<meta
property="og:title"
content="Hugging Face - The AI community building the future."
/>
<meta property="og:type" content="website" />
<title>Hugging Face - The AI community building the future.</title>
<style>
body {
margin: 0;
}
main {
background-color: white;
min-height: 100vh;
padding: 7rem 1rem 8rem 1rem;
text-align: center;
font-family: Source Sans Pro, ui-sans-serif, system-ui, -apple-system,
BlinkMacSystemFont, Segoe UI, Roboto, Helvetica Neue, Arial, Noto Sans,
sans-serif, Apple Color Emoji, Segoe UI Emoji, Segoe UI Symbol,
Noto Color Emoji;
}
img {
width: 6rem;
height: 6rem;
margin: 0 auto 1rem;
}
h1 {
font-size: 3.75rem;
line-height: 1;
color: rgba(31, 41, 55, 1);
font-weight: 700;
box-sizing: border-box;
margin: 0 auto;
}
p, a {
color: rgba(107, 114, 128, 1);
font-size: 1.125rem;
line-height: 1.75rem;
max-width: 28rem;
box-sizing: border-box;
margin: 0 auto;
}
.dark main {
background-color: rgb(11, 15, 25);
}
.dark h1 {
color: rgb(209, 213, 219);
}
.dark p, .dark a {
color: rgb(156, 163, 175);
}
</style>
<script>
// On page load or when changing themes, best to add inline in `head` to avoid FOUC
const key = "_tb_global_settings";
let theme = window.matchMedia("(prefers-color-scheme: dark)").matches
? "dark"
: "light";
try {
const storageTheme = JSON.parse(window.localStorage.getItem(key)).theme;
if (storageTheme) {
theme = storageTheme === "dark" ? "dark" : "light";
}
} catch (e) {}
if (theme === "dark") {
document.documentElement.classList.add("dark");
} else {
document.documentElement.classList.remove("dark");
}
</script>
</head>
<body>
<main>
<img
src="https://cdn-media.huggingface.co/assets/huggingface_logo.svg"
alt=""
/>
<div>
<h1>429</h1>
<p>We had to rate limit you. If you think it's an error, send us <a href="mailto:website@huggingface.co">an email</a></p>
</div>
</main>
</body>
</html> |
genki10/Trial3BERT_AugV8_k7_task1_organization_sp030_lw010_fold4 | genki10 | "2025-04-07T00:38:24Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-04-07T00:22:57Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: Trial3BERT_AugV8_k7_task1_organization_sp030_lw010_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Trial3BERT_AugV8_k7_task1_organization_sp030_lw010_fold4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8567
- Qwk: 0.4601
- Mse: 0.8567
- Rmse: 0.9256
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 5 | 7.7230 | 0.0018 | 7.7230 | 2.7790 |
| No log | 2.0 | 10 | 3.7022 | 0.0040 | 3.7022 | 1.9241 |
| No log | 3.0 | 15 | 1.8738 | 0.0509 | 1.8738 | 1.3689 |
| No log | 4.0 | 20 | 1.1164 | 0.0107 | 1.1164 | 1.0566 |
| No log | 5.0 | 25 | 1.1792 | 0.0107 | 1.1792 | 1.0859 |
| No log | 6.0 | 30 | 0.8684 | 0.1514 | 0.8684 | 0.9319 |
| No log | 7.0 | 35 | 1.5485 | 0.1531 | 1.5485 | 1.2444 |
| No log | 8.0 | 40 | 0.7063 | 0.4789 | 0.7063 | 0.8404 |
| No log | 9.0 | 45 | 1.5970 | 0.1683 | 1.5970 | 1.2637 |
| No log | 10.0 | 50 | 0.6996 | 0.3603 | 0.6996 | 0.8364 |
| No log | 11.0 | 55 | 0.7871 | 0.4002 | 0.7871 | 0.8872 |
| No log | 12.0 | 60 | 0.5792 | 0.4541 | 0.5792 | 0.7611 |
| No log | 13.0 | 65 | 0.6675 | 0.5302 | 0.6675 | 0.8170 |
| No log | 14.0 | 70 | 0.5936 | 0.5025 | 0.5936 | 0.7705 |
| No log | 15.0 | 75 | 0.6064 | 0.5651 | 0.6064 | 0.7787 |
| No log | 16.0 | 80 | 0.7287 | 0.5474 | 0.7287 | 0.8537 |
| No log | 17.0 | 85 | 1.2337 | 0.3222 | 1.2337 | 1.1107 |
| No log | 18.0 | 90 | 0.6381 | 0.5695 | 0.6381 | 0.7988 |
| No log | 19.0 | 95 | 0.6365 | 0.5894 | 0.6365 | 0.7978 |
| No log | 20.0 | 100 | 1.2623 | 0.3490 | 1.2623 | 1.1235 |
| No log | 21.0 | 105 | 1.0040 | 0.4510 | 1.0040 | 1.0020 |
| No log | 22.0 | 110 | 1.1975 | 0.3715 | 1.1975 | 1.0943 |
| No log | 23.0 | 115 | 1.1188 | 0.3913 | 1.1188 | 1.0578 |
| No log | 24.0 | 120 | 0.9801 | 0.4109 | 0.9801 | 0.9900 |
| No log | 25.0 | 125 | 0.7136 | 0.5109 | 0.7136 | 0.8448 |
| No log | 26.0 | 130 | 0.9630 | 0.4389 | 0.9630 | 0.9813 |
| No log | 27.0 | 135 | 1.0872 | 0.3892 | 1.0872 | 1.0427 |
| No log | 28.0 | 140 | 0.8670 | 0.4530 | 0.8670 | 0.9311 |
| No log | 29.0 | 145 | 0.7710 | 0.4951 | 0.7710 | 0.8781 |
| No log | 30.0 | 150 | 0.7506 | 0.5060 | 0.7506 | 0.8664 |
| No log | 31.0 | 155 | 0.8108 | 0.4555 | 0.8108 | 0.9005 |
| No log | 32.0 | 160 | 1.0215 | 0.3629 | 1.0215 | 1.0107 |
| No log | 33.0 | 165 | 0.7301 | 0.5286 | 0.7301 | 0.8544 |
| No log | 34.0 | 170 | 0.8567 | 0.4601 | 0.8567 | 0.9256 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
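### Loading sketch

A minimal inference sketch, assuming the checkpoint loads with the standard sequence-classification head it was fine-tuned with; the raw output still has to be mapped back to the task's score scale, which is not documented here:

```py
# Minimal sketch: load the fold-4 checkpoint and score one example.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "genki10/Trial3BERT_AugV8_k7_task1_organization_sp030_lw010_fold4"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("Example essay text to score.", return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # raw score(s); interpretation depends on the (unspecified) training data
```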
|
kadirnar/Orpheus-TTS-JP | kadirnar | "2025-04-07T00:37:36Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-06T12:48:53Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
trashpanda-org/Qwen2.5-32B-Dark-Days-exp1 | trashpanda-org | "2025-04-07T00:37:21Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"mergekit",
"merge",
"conversational",
"arxiv:2306.01708",
"base_model:Columbidae/Qwen2.5-32B",
"base_model:merge:Columbidae/Qwen2.5-32B",
"base_model:Columbidae/Qwen2.5-32B-Instruct",
"base_model:merge:Columbidae/Qwen2.5-32B-Instruct",
"base_model:trashpanda-org/Qwen2.5-32B-Dark-Days-stage1",
"base_model:merge:trashpanda-org/Qwen2.5-32B-Dark-Days-stage1",
"base_model:trashpanda-org/Qwen2.5-32B-Dark-Days-stage2",
"base_model:merge:trashpanda-org/Qwen2.5-32B-Dark-Days-stage2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-07T00:32:49Z" | ---
base_model:
- trashpanda-org/Qwen2.5-32B-Dark-Days-stage1
- trashpanda-org/Qwen2.5-32B-Dark-Days-stage2
- Columbidae/Qwen2.5-32B
- Columbidae/Qwen2.5-32B-Instruct
library_name: transformers
tags:
- mergekit
- merge
---
# dark-days-exp-1
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Columbidae/Qwen2.5-32B](https://huggingface.co/Columbidae/Qwen2.5-32B) as a base.
### Models Merged
The following models were included in the merge:
* [trashpanda-org/Qwen2.5-32B-Dark-Days-stage1](https://huggingface.co/trashpanda-org/Qwen2.5-32B-Dark-Days-stage1)
* [trashpanda-org/Qwen2.5-32B-Dark-Days-stage2](https://huggingface.co/trashpanda-org/Qwen2.5-32B-Dark-Days-stage2)
* [Columbidae/Qwen2.5-32B-Instruct](https://huggingface.co/Columbidae/Qwen2.5-32B-Instruct)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: trashpanda-org/Qwen2.5-32B-Dark-Days-stage1
parameters:
weight: 1
density: 1
- model: trashpanda-org/Qwen2.5-32B-Dark-Days-stage2
parameters:
weight: 1
density: 1
- model: Columbidae/Qwen2.5-32B-Instruct
parameters:
weight: 0.9
density: 0.9
merge_method: ties
base_model: Columbidae/Qwen2.5-32B
parameters:
weight: 0.9
density: 0.9
normalize: true
int8_mask: true
tokenizer_source: Columbidae/Qwen2.5-32B-Instruct
dtype: bfloat16
```
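The merged weights should load like any other Qwen2.5 causal LM; a minimal sketch (an assumption, not an official snippet) using 🤗 Transformers:

```py
# Sketch: load the merged checkpoint; the tokenizer mirrors `tokenizer_source` above.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "trashpanda-org/Qwen2.5-32B-Dark-Days-exp1"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype=torch.bfloat16, device_map="auto")

prompt = "Write a short scene set in a city after dark."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```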
|
Brianpuz/Qwen2.5-0.5B-Instruct-Q4_K_M-GGUF | Brianpuz | "2025-04-07T00:34:31Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"chat",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:quantized:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-04-07T00:34:26Z" | ---
base_model: Qwen/Qwen2.5-0.5B-Instruct
language:
- en
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct/blob/main/LICENSE
pipeline_tag: text-generation
tags:
- chat
- llama-cpp
- gguf-my-repo
---
# Brianpuz/Qwen2.5-0.5B-Instruct-Q4_K_M-GGUF
Absolutely tremendous! This repo features **GGUF quantized** versions of [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct) — made possible using the *very powerful* `llama.cpp`. Believe me, it's fast, it's smart, it's winning.
## Quantized Versions:
Only the best quantization. You’ll love it.
## Run with llama.cpp
Just plug it in, hit the command line, and boom — you're running world-class AI, folks:
```bash
llama-cli --hf-repo Brianpuz/Qwen2.5-0.5B-Instruct-Q4_K_M-GGUF --hf-file qwen2.5-0.5b-instruct-q4_k_m.gguf -p "AI First, but also..."
```
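Prefer Python? The same GGUF can be pulled and run with `llama-cpp-python` (a sketch, assuming `llama-cpp-python` and `huggingface_hub` are installed):

```py
# Sketch: download the Q4_K_M GGUF from this repo and run a short completion.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Brianpuz/Qwen2.5-0.5B-Instruct-Q4_K_M-GGUF",
    filename="qwen2.5-0.5b-instruct-q4_k_m.gguf",
    n_ctx=2048,
)
out = llm("AI First, but also...", max_tokens=64)
print(out["choices"][0]["text"])
```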
This beautiful Hugging Face Space was brought to you by the **amazing team at [Antigma Labs](https://antigma.ai)**. Great people. Big vision. Doing things that matter — and doing them right.
Total winners.
|
AlexanderLab/wmnk | AlexanderLab | "2025-04-07T00:34:20Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-04-06T23:32:27Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: WMNK
---
# Wmnk
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using AI toolkit: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `WMNK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "WMNK",
"lora_weights": "https://huggingface.co/AlexanderLab/wmnk/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('AlexanderLab/wmnk', weight_name='lora.safetensors')
image = pipeline('WMNK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2800
- Learning rate: 0.0004
- LoRA rank: 32
## Contribute your own examples
You can use the [community tab](https://huggingface.co/AlexanderLab/wmnk/discussions) to add images that show off what you’ve made with this LoRA.
|
gaunernst/gemma-3-12b-it-qat-autoawq | gaunernst | "2025-04-07T00:33:13Z" | 0 | 0 | null | [
"safetensors",
"gemma3",
"gemma",
"image-text-to-text",
"conversational",
"arxiv:1905.07830",
"arxiv:1905.10044",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1705.03551",
"arxiv:1911.01547",
"arxiv:1907.10641",
"arxiv:1903.00161",
"arxiv:2009.03300",
"arxiv:2304.06364",
"arxiv:2103.03874",
"arxiv:2110.14168",
"arxiv:2311.12022",
"arxiv:2108.07732",
"arxiv:2107.03374",
"arxiv:2210.03057",
"arxiv:2106.03193",
"arxiv:1910.11856",
"arxiv:2502.12404",
"arxiv:2502.21228",
"arxiv:2404.16816",
"arxiv:2104.12756",
"arxiv:2311.16502",
"arxiv:2203.10244",
"arxiv:2404.12390",
"arxiv:1810.12440",
"arxiv:1908.02660",
"arxiv:2312.11805",
"base_model:google/gemma-3-12b-it",
"base_model:quantized:google/gemma-3-12b-it",
"license:gemma",
"4-bit",
"awq",
"region:us"
] | image-text-to-text | "2025-04-06T16:28:57Z" | ---
license: gemma
pipeline_tag: image-text-to-text
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: >-
To access Gemma on Hugging Face, you’re required to review and agree to
Google’s usage license. To do this, please ensure you’re logged in to Hugging
Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
base_model: google/gemma-3-12b-it
tags:
- gemma
- gemma3
---
# Gemma 3 12B Instruction-tuned QAT AutoAWQ
This checkpoint was converted from https://huggingface.co/google/gemma-3-12b-it-qat-q4_0-gguf to AutoAWQ format and BF16 dtype (hence, not lossless). The vision tower was transplanted from https://huggingface.co/google/gemma-3-12b-it.
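Since the weights are stored in AutoAWQ format, they should be loadable straight through 🤗 Transformers; a rough, untested sketch (assumes a recent Transformers release with Gemma 3 support and `autoawq` installed):

```py
# Rough sketch: run the AWQ checkpoint through the image-text-to-text pipeline.
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",
    model="gaunernst/gemma-3-12b-it-qat-autoawq",
    device_map="auto",
)
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "url": "https://github.com/bebechien/gemma/blob/main/surprise.png?raw=true"},
        {"type": "text", "text": "Describe this image."},
    ],
}]
print(pipe(text=messages, max_new_tokens=128))
```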
Below is the original model card.
# Gemma 3 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs/core)
> [!Note]
> This repository corresponds to the 12B **instruction-tuned** version of the Gemma 3 model in GGUF format using Quantization Aware Training (QAT).
> The GGUF corresponds to Q4_0 quantization.
>
> Thanks to QAT, the model is able to preserve similar quality as `bfloat16` while significantly reducing the memory requirements
> to load the model.
>
> You can find the half-precision version [here](https://huggingface.co/google/gemma-3-12b-it).
**Resources and Technical Documentation**:
* [Gemma 3 Technical Report][g3-tech-report]
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma3]
**Terms of Use**: [Terms][terms]
**Authors**: Google DeepMind
## Model Information
Summary description and brief definition of inputs and outputs.
### Description
Gemma is a family of lightweight, state-of-the-art open models from Google,
built from the same research and technology used to create the Gemini models.
Gemma 3 models are multimodal, handling text and image input and generating text
output, with open weights for both pre-trained variants and instruction-tuned
variants. Gemma 3 has a large, 128K context window, multilingual support in over
140 languages, and is available in more sizes than previous versions. Gemma 3
models are well-suited for a variety of text generation and image understanding
tasks, including question answering, summarization, and reasoning. Their
relatively small size makes it possible to deploy them in environments with
limited resources such as laptops, desktops or your own cloud infrastructure,
democratizing access to state of the art AI models and helping foster innovation
for everyone.
### Inputs and outputs
- **Input:**
- Text string, such as a question, a prompt, or a document to be summarized
- Images, normalized to 896 x 896 resolution and encoded to 256 tokens
each
- Total input context of 128K tokens for the 4B, 12B, and 27B sizes, and
32K tokens for the 1B size
- **Output:**
- Generated text in response to the input, such as an answer to a
question, analysis of image content, or a summary of a document
- Total output context of 8192 tokens
### Usage
Below, there are some code snippets on how to get quickly started with running the model.
**llama.cpp (text-only)**
```sh
./llama-cli -hf google/gemma-3-12b-it-qat-q4_0-gguf -p "Write a poem about the Kraken."
```
**llama.cpp (image input)**
```sh
wget https://github.com/bebechien/gemma/blob/main/surprise.png?raw=true -O ~/Downloads/surprise.png
./llama-gemma3-cli -hf google/gemma-3-12b-it-qat-q4_0-gguf -p "Describe this image." --image ~/Downloads/surprise.png
```
**ollama (text only)**
Using GGUFs with Ollama via Hugging Face does not support image inputs at the moment. Please check the [docs on running gated repositories](https://huggingface.co/docs/hub/en/ollama#run-private-ggufs-from-the-hugging-face-hub).
```sh
ollama run hf.co/google/gemma-3-12b-it-qat-q4_0-gguf
```
### Citation
```none
@article{gemma_2025,
title={Gemma 3},
url={https://goo.gle/Gemma3Report},
publisher={Kaggle},
author={Gemma Team},
year={2025}
}
```
## Model Data
Data used for model training and how the data was processed.
### Training Dataset
These models were trained on a dataset of text data that includes a wide variety
of sources. The 27B model was trained with 14 trillion tokens, the 12B model was
trained with 12 trillion tokens, 4B model was trained with 4 trillion tokens and
1B with 2 trillion tokens. Here are the key components:
- Web Documents: A diverse collection of web text ensures the model is
exposed to a broad range of linguistic styles, topics, and vocabulary. The
training dataset includes content in over 140 languages.
- Code: Exposing the model to code helps it to learn the syntax and
patterns of programming languages, which improves its ability to generate
code and understand code-related questions.
- Mathematics: Training on mathematical text helps the model learn logical
reasoning, symbolic representation, and to address mathematical queries.
- Images: A wide range of images enables the model to perform image
analysis and visual data extraction tasks.
The combination of these diverse data sources is crucial for training a powerful
multimodal model that can handle a wide variety of different tasks and data
formats.
### Data Preprocessing
Here are the key data cleaning and filtering methods applied to the training
data:
- CSAM Filtering: Rigorous CSAM (Child Sexual Abuse Material) filtering
was applied at multiple stages in the data preparation process to ensure
the exclusion of harmful and illegal content.
- Sensitive Data Filtering: As part of making Gemma pre-trained models
safe and reliable, automated techniques were used to filter out certain
personal information and other sensitive data from training sets.
- Additional methods: Filtering based on content quality and safety in
line with [our policies][safety-policies].
## Implementation Information
Details about the model internals.
### Hardware
Gemma was trained using [Tensor Processing Unit (TPU)][tpu] hardware (TPUv4p,
TPUv5p and TPUv5e). Training vision-language models (VLMS) requires significant
computational power. TPUs, designed specifically for matrix operations common in
machine learning, offer several advantages in this domain:
- Performance: TPUs are specifically designed to handle the massive
computations involved in training VLMs. They can speed up training
considerably compared to CPUs.
- Memory: TPUs often come with large amounts of high-bandwidth memory,
allowing for the handling of large models and batch sizes during training.
This can lead to better model quality.
- Scalability: TPU Pods (large clusters of TPUs) provide a scalable
solution for handling the growing complexity of large foundation models.
You can distribute training across multiple TPU devices for faster and more
efficient processing.
- Cost-effectiveness: In many scenarios, TPUs can provide a more
cost-effective solution for training large models compared to CPU-based
infrastructure, especially when considering the time and resources saved
due to faster training.
- These advantages are aligned with
[Google's commitments to operate sustainably][sustainability].
### Software
Training was done using [JAX][jax] and [ML Pathways][ml-pathways].
JAX allows researchers to take advantage of the latest generation of hardware,
including TPUs, for faster and more efficient training of large models. ML
Pathways is Google's latest effort to build artificially intelligent systems
capable of generalizing across multiple tasks. This is specially suitable for
foundation models, including large language models like these ones.
Together, JAX and ML Pathways are used as described in the
[paper about the Gemini family of models][gemini-2-paper]; *"the 'single
controller' programming model of Jax and Pathways allows a single Python
process to orchestrate the entire training run, dramatically simplifying the
development workflow."*
## Evaluation
Model evaluation metrics and results.
### Benchmark Results
These models were evaluated against a large collection of different datasets and
metrics to cover different aspects of text generation:
#### Reasoning and factuality
| Benchmark | Metric | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:--------------:|:-------------:|:--------------:|:--------------:|
| [HellaSwag][hellaswag] | 10-shot | 62.3 | 77.2 | 84.2 | 85.6 |
| [BoolQ][boolq] | 0-shot | 63.2 | 72.3 | 78.8 | 82.4 |
| [PIQA][piqa] | 0-shot | 73.8 | 79.6 | 81.8 | 83.3 |
| [SocialIQA][socialiqa] | 0-shot | 48.9 | 51.9 | 53.4 | 54.9 |
| [TriviaQA][triviaqa] | 5-shot | 39.8 | 65.8 | 78.2 | 85.5 |
| [Natural Questions][naturalq] | 5-shot | 9.48 | 20.0 | 31.4 | 36.1 |
| [ARC-c][arc] | 25-shot | 38.4 | 56.2 | 68.9 | 70.6 |
| [ARC-e][arc] | 0-shot | 73.0 | 82.4 | 88.3 | 89.0 |
| [WinoGrande][winogrande] | 5-shot | 58.2 | 64.7 | 74.3 | 78.8 |
| [BIG-Bench Hard][bbh] | few-shot | 28.4 | 50.9 | 72.6 | 77.7 |
| [DROP][drop] | 1-shot | 42.4 | 60.1 | 72.2 | 77.2 |
[hellaswag]: https://arxiv.org/abs/1905.07830
[boolq]: https://arxiv.org/abs/1905.10044
[piqa]: https://arxiv.org/abs/1911.11641
[socialiqa]: https://arxiv.org/abs/1904.09728
[triviaqa]: https://arxiv.org/abs/1705.03551
[naturalq]: https://github.com/google-research-datasets/natural-questions
[arc]: https://arxiv.org/abs/1911.01547
[winogrande]: https://arxiv.org/abs/1907.10641
[bbh]: https://paperswithcode.com/dataset/bbh
[drop]: https://arxiv.org/abs/1903.00161
#### STEM and code
| Benchmark | Metric | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |----------------|:-------------:|:--------------:|:--------------:|
| [MMLU][mmlu] | 5-shot | 59.6 | 74.5 | 78.6 |
| [MMLU][mmlu] (Pro COT) | 5-shot | 29.2 | 45.3 | 52.2 |
| [AGIEval][agieval] | 3-5-shot | 42.1 | 57.4 | 66.2 |
| [MATH][math] | 4-shot | 24.2 | 43.3 | 50.0 |
| [GSM8K][gsm8k] | 8-shot | 38.4 | 71.0 | 82.6 |
| [GPQA][gpqa] | 5-shot | 15.0 | 25.4 | 24.3 |
| [MBPP][mbpp] | 3-shot | 46.0 | 60.4 | 65.6 |
| [HumanEval][humaneval] | 0-shot | 36.0 | 45.7 | 48.8 |
[mmlu]: https://arxiv.org/abs/2009.03300
[agieval]: https://arxiv.org/abs/2304.06364
[math]: https://arxiv.org/abs/2103.03874
[gsm8k]: https://arxiv.org/abs/2110.14168
[gpqa]: https://arxiv.org/abs/2311.12022
[mbpp]: https://arxiv.org/abs/2108.07732
[humaneval]: https://arxiv.org/abs/2107.03374
#### Multilingual
| Benchmark | Gemma 3 PT 1B | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------------ |:-------------:|:-------------:|:--------------:|:--------------:|
| [MGSM][mgsm] | 2.04 | 34.7 | 64.3 | 74.3 |
| [Global-MMLU-Lite][global-mmlu-lite] | 24.9 | 57.0 | 69.4 | 75.7 |
| [WMT24++][wmt24pp] (ChrF) | 36.7 | 48.4 | 53.9 | 55.7 |
| [FloRes][flores] | 29.5 | 39.2 | 46.0 | 48.8 |
| [XQuAD][xquad] (all) | 43.9 | 68.0 | 74.5 | 76.8 |
| [ECLeKTic][eclektic] | 4.69 | 11.0 | 17.2 | 24.4 |
| [IndicGenBench][indicgenbench] | 41.4 | 57.2 | 61.7 | 63.4 |
[mgsm]: https://arxiv.org/abs/2210.03057
[flores]: https://arxiv.org/abs/2106.03193
[xquad]: https://arxiv.org/abs/1910.11856v3
[global-mmlu-lite]: https://huggingface.co/datasets/CohereForAI/Global-MMLU-Lite
[wmt24pp]: https://arxiv.org/abs/2502.12404v1
[eclektic]: https://arxiv.org/abs/2502.21228
[indicgenbench]: https://arxiv.org/abs/2404.16816
#### Multimodal
| Benchmark | Gemma 3 PT 4B | Gemma 3 PT 12B | Gemma 3 PT 27B |
| ------------------------------ |:-------------:|:--------------:|:--------------:|
| [COCOcap][coco-cap] | 102 | 111 | 116 |
| [DocVQA][docvqa] (val) | 72.8 | 82.3 | 85.6 |
| [InfoVQA][info-vqa] (val) | 44.1 | 54.8 | 59.4 |
| [MMMU][mmmu] (pt) | 39.2 | 50.3 | 56.1 |
| [TextVQA][textvqa] (val) | 58.9 | 66.5 | 68.6 |
| [RealWorldQA][realworldqa] | 45.5 | 52.2 | 53.9 |
| [ReMI][remi] | 27.3 | 38.5 | 44.8 |
| [AI2D][ai2d] | 63.2 | 75.2 | 79.0 |
| [ChartQA][chartqa] | 63.6 | 74.7 | 76.3 |
| [VQAv2][vqav2] | 63.9 | 71.2 | 72.9 |
| [BLINK][blinkvqa] | 38.0 | 35.9 | 39.6 |
| [OKVQA][okvqa] | 51.0 | 58.7 | 60.2 |
| [TallyQA][tallyqa] | 42.5 | 51.8 | 54.3 |
| [SpatialSense VQA][ss-vqa] | 50.9 | 60.0 | 59.4 |
| [CountBenchQA][countbenchqa] | 26.1 | 17.8 | 68.0 |
[coco-cap]: https://cocodataset.org/#home
[docvqa]: https://www.docvqa.org/
[info-vqa]: https://arxiv.org/abs/2104.12756
[mmmu]: https://arxiv.org/abs/2311.16502
[textvqa]: https://textvqa.org/
[realworldqa]: https://paperswithcode.com/dataset/realworldqa
[remi]: https://arxiv.org/html/2406.09175v1
[ai2d]: https://allenai.org/data/diagrams
[chartqa]: https://arxiv.org/abs/2203.10244
[vqav2]: https://visualqa.org/index.html
[blinkvqa]: https://arxiv.org/abs/2404.12390
[okvqa]: https://okvqa.allenai.org/
[tallyqa]: https://arxiv.org/abs/1810.12440
[ss-vqa]: https://arxiv.org/abs/1908.02660
[countbenchqa]: https://github.com/google-research/big_vision/blob/main/big_vision/datasets/countbenchqa/
## Ethics and Safety
Ethics and safety evaluation approach and results.
### Evaluation Approach
Our evaluation methods include structured evaluations and internal red-teaming
testing of relevant content policies. Red-teaming was conducted by a number of
different teams, each with different goals and human evaluation metrics. These
models were evaluated against a number of different categories relevant to
ethics and safety, including:
- **Child Safety**: Evaluation of text-to-text and image to text prompts
covering child safety policies, including child sexual abuse and
exploitation.
- **Content Safety:** Evaluation of text-to-text and image to text prompts
covering safety policies including, harassment, violence and gore, and hate
speech.
- **Representational Harms**: Evaluation of text-to-text and image to text
prompts covering safety policies including bias, stereotyping, and harmful
associations or inaccuracies.
In addition to development level evaluations, we conduct "assurance
evaluations" which are our 'arms-length' internal evaluations for responsibility
governance decision making. They are conducted separately from the model
development team, to inform decision making about release. High level findings
are fed back to the model team, but prompt sets are held-out to prevent
overfitting and preserve the results' ability to inform decision making.
Assurance evaluation results are reported to our Responsibility & Safety Council
as part of release review.
### Evaluation Results
For all areas of safety testing, we saw major improvements in the categories of
child safety, content safety, and representational harms relative to previous
Gemma models. All testing was conducted without safety filters to evaluate the
model capabilities and behaviors. For both text-to-text and image-to-text, and
across all model sizes, the model produced minimal policy violations, and showed
significant improvements over previous Gemma models' performance with respect
to ungrounded inferences. A limitation of our evaluations was that they included only
English-language prompts.
## Usage and Limitations
These models have certain limitations that users should be aware of.
### Intended Usage
Open vision-language models (VLMs) have a wide range of applications
across various industries and domains. The following list of potential uses is
not comprehensive. The purpose of this list is to provide contextual information
about the possible use-cases that the model creators considered as part of model
training and development.
- Content Creation and Communication
- Text Generation: These models can be used to generate creative text
formats such as poems, scripts, code, marketing copy, and email drafts.
- Chatbots and Conversational AI: Power conversational interfaces
for customer service, virtual assistants, or interactive applications.
- Text Summarization: Generate concise summaries of a text corpus,
research papers, or reports.
- Image Data Extraction: These models can be used to extract,
interpret, and summarize visual data for text communications.
- Research and Education
- Natural Language Processing (NLP) and VLM Research: These
models can serve as a foundation for researchers to experiment with VLM
and NLP techniques, develop algorithms, and contribute to the
advancement of the field.
- Language Learning Tools: Support interactive language learning
experiences, aiding in grammar correction or providing writing practice.
- Knowledge Exploration: Assist researchers in exploring large
bodies of text by generating summaries or answering questions about
specific topics.
### Limitations
- Training Data
- The quality and diversity of the training data significantly
influence the model's capabilities. Biases or gaps in the training data
can lead to limitations in the model's responses.
- The scope of the training dataset determines the subject areas
the model can handle effectively.
- Context and Task Complexity
- Models are better at tasks that can be framed with clear
prompts and instructions. Open-ended or highly complex tasks might be
challenging.
- A model's performance can be influenced by the amount of context
provided (longer context generally leads to better outputs, up to a
certain point).
- Language Ambiguity and Nuance
- Natural language is inherently complex. Models might struggle
to grasp subtle nuances, sarcasm, or figurative language.
- Factual Accuracy
- Models generate responses based on information they learned
from their training datasets, but they are not knowledge bases. They
may generate incorrect or outdated factual statements.
- Common Sense
- Models rely on statistical patterns in language. They might
lack the ability to apply common sense reasoning in certain situations.
### Ethical Considerations and Risks
The development of vision-language models (VLMs) raises several ethical
concerns. In creating an open model, we have carefully considered the following:
- Bias and Fairness
- VLMs trained on large-scale, real-world text and image data can
reflect socio-cultural biases embedded in the training material. These
models underwent careful scrutiny, input data pre-processing described
and posterior evaluations reported in this card.
- Misinformation and Misuse
- VLMs can be misused to generate text that is false, misleading,
or harmful.
- Guidelines are provided for responsible use with the model, see the
[Responsible Generative AI Toolkit][rai-toolkit].
- Transparency and Accountability:
- This model card summarizes details on the models' architecture,
capabilities, limitations, and evaluation processes.
- A responsibly developed open model offers the opportunity to
share innovation by making VLM technology accessible to developers and
researchers across the AI ecosystem.
Risks identified and mitigations:
- **Perpetuation of biases**: It's encouraged to perform continuous
monitoring (using evaluation metrics, human review) and the exploration of
de-biasing techniques during model training, fine-tuning, and other use
cases.
- **Generation of harmful content**: Mechanisms and guidelines for content
safety are essential. Developers are encouraged to exercise caution and
implement appropriate content safety safeguards based on their specific
product policies and application use cases.
- **Misuse for malicious purposes**: Technical limitations and developer
and end-user education can help mitigate against malicious applications of
VLMs. Educational resources and reporting mechanisms for users to flag
misuse are provided. Prohibited uses of Gemma models are outlined in the
[Gemma Prohibited Use Policy][prohibited-use].
- **Privacy violations**: Models were trained on data filtered for removal
of certain personal information and other sensitive data. Developers are
encouraged to adhere to privacy regulations with privacy-preserving
techniques.
### Benefits
At the time of release, this family of models provides high-performance open
vision-language model implementations designed from the ground up for
responsible AI development compared to similarly sized models.
Using the benchmark evaluation metrics described in this document, these models
have shown to provide superior performance to other, comparably-sized open model
alternatives.
[g3-tech-report]: https://goo.gle/Gemma3Report
[rai-toolkit]: https://ai.google.dev/responsible
[kaggle-gemma]: https://www.kaggle.com/models/google/gemma-3
[vertex-mg-gemma3]: https://console.cloud.google.com/vertex-ai/publishers/google/model-garden/gemma3
[terms]: https://ai.google.dev/gemma/terms
[safety-policies]: https://ai.google/static/documents/ai-responsibility-update-published-february-2025.pdf
[prohibited-use]: https://ai.google.dev/gemma/prohibited_use_policy
[tpu]: https://cloud.google.com/tpu/docs/intro-to-tpu
[sustainability]: https://sustainability.google/operating-sustainably/
[jax]: https://github.com/jax-ml/jax
[ml-pathways]: https://blog.google/technology/ai/introducing-pathways-next-generation-ai-architecture/
[sustainability]: https://sustainability.google/operating-sustainably/
[gemini-2-paper]: https://arxiv.org/abs/2312.11805 |
xmarata/llama_level_generator_v1 | xmarata | "2025-04-07T00:31:42Z" | 0 | 0 | transformers | [
"transformers",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-04-07T00:31:38Z" | ---
base_model: unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** xmarata
- **License:** apache-2.0
- **Finetuned from model :** unsloth/meta-llama-3.1-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hollywoodfrancis/simpleLLM | hollywoodfrancis | "2025-04-07T00:30:53Z" | 0 | 0 | null | [
"finance",
"coding",
"text-generation",
"dataset:deepmind/code_contests",
"dataset:Muennighoff/natural-instructions",
"dataset:bigcode/the-stack-v2",
"dataset:Shuu12121/python-codesearch-dataset-open",
"dataset:Jobey1/Collection_Crypto_financial_trading_reasearch",
"dataset:benstaf/nasdaq_news_sentiment",
"base_model:meta-llama/Llama-4-Scout-17B-16E-Instruct",
"base_model:finetune:meta-llama/Llama-4-Scout-17B-16E-Instruct",
"license:other",
"region:us"
] | text-generation | "2025-04-07T00:08:58Z" | ---
license: other
license_name: hollywood-francis100
license_link: LICENSE
datasets:
- deepmind/code_contests
- Muennighoff/natural-instructions
- bigcode/the-stack-v2
- Shuu12121/python-codesearch-dataset-open
- Jobey1/Collection_Crypto_financial_trading_reasearch
- benstaf/nasdaq_news_sentiment
pipeline_tag: text-generation
tags:
- finance
- coding
base_model:
- meta-llama/Llama-4-Scout-17B-16E-Instruct
--- |
y0usly/MisfitCarti_250_Epochs | y0usly | "2025-04-07T00:30:45Z" | 0 | 0 | null | [
"license:other",
"region:us"
] | null | "2024-05-05T17:32:23Z" | ---
license: other
license_name: carti
license_link: LICENSE
---
|
jrolf/finetuned-tinyllama | jrolf | "2025-04-07T00:29:24Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-07T00:29:22Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lilian5657/mi-modelo-checkpoint | lilian5657 | "2025-04-07T00:27:53Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:adapter:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"region:us"
] | null | "2025-04-07T00:25:31Z" | ---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
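The card leaves this section empty. Purely as an illustrative sketch — not the author's code — here is how a PEFT adapter on the 4-bit Mistral base listed above is typically loaded; the two repo ids come from this card's metadata, everything else (prompt, generation settings) is an assumption.

```python
# Illustrative sketch only: assumes the adapter targets the 4-bit base model
# named in the card metadata (loading it requires bitsandbytes).
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

adapter_id = "lilian5657/mi-modelo-checkpoint"  # this repository
base_id = "unsloth/mistral-7b-instruct-v0.3-bnb-4bit"

model = AutoPeftModelForCausalLM.from_pretrained(adapter_id, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_id)

prompt = "Explain in one sentence what a LoRA adapter is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```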
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2-gguf | RichardErkhov | "2025-04-07T00:27:44Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-06T21:12:26Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2 - GGUF
- Model creator: https://huggingface.co/AndreyRzhaksinskiy/
- Original model: https://huggingface.co/AndreyRzhaksinskiy/CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q2_K.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2-gguf/blob/main/CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q2_K.gguf) | Q2_K | 2.36GB |
| [CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2-gguf/blob/main/CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.IQ3_S.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2-gguf/blob/main/CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2-gguf/blob/main/CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.IQ3_M.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2-gguf/blob/main/CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q3_K.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2-gguf/blob/main/CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q3_K.gguf) | Q3_K | 3.07GB |
| [CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2-gguf/blob/main/CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2-gguf/blob/main/CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2-gguf/blob/main/CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q4_0.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2-gguf/blob/main/CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q4_0.gguf) | Q4_0 | 3.56GB |
| [CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2-gguf/blob/main/CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2-gguf/blob/main/CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q4_K.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2-gguf/blob/main/CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q4_K.gguf) | Q4_K | 3.8GB |
| [CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2-gguf/blob/main/CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q4_1.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2-gguf/blob/main/CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q4_1.gguf) | Q4_1 | 3.95GB |
| [CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q5_0.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2-gguf/blob/main/CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q5_0.gguf) | Q5_0 | 4.33GB |
| [CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2-gguf/blob/main/CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q5_K.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2-gguf/blob/main/CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q5_K.gguf) | Q5_K | 4.45GB |
| [CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2-gguf/blob/main/CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q5_1.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2-gguf/blob/main/CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q5_1.gguf) | Q5_1 | 4.72GB |
| [CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q6_K.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2-gguf/blob/main/CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q6_K.gguf) | Q6_K | 5.15GB |
| [CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q8_0.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2-gguf/blob/main/CDS-CL-7b-Instruct-hf-20241002-pretraining-exp2.Q8_0.gguf) | Q8_0 | 6.67GB |
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
zizi917/tinyllama-dpo-pairrm | zizi917 | "2025-04-07T00:24:03Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"arxiv:1910.09700",
"base_model:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0",
"region:us"
] | null | "2025-04-07T00:23:57Z" | ---
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
library_name: peft
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
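The card leaves this section empty. As an illustrative sketch only (not the author's code), a LoRA adapter trained on top of the chat base model above can be loaded as follows; the repo ids come from the card metadata, while the chat message and generation settings are assumptions.

```python
# Illustrative sketch only: assumes this repository holds a LoRA adapter
# trained on top of the TinyLlama chat base model named above.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_id = "zizi917/tinyllama-dpo-pairrm"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(base, adapter_id)

messages = [{"role": "user", "content": "Give me one tip for writing clear documentation."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(input_ids.to(model.device), max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```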
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
### Framework versions
- PEFT 0.14.0 |
tscstudios/xtvxej0hszbmd6uyappaosu73ml1_c8d22c85-fd9e-4576-a1a2-d37dcb8379b4 | tscstudios | "2025-04-07T00:22:56Z" | 0 | 0 | diffusers | [
"diffusers",
"flux",
"lora",
"replicate",
"text-to-image",
"en",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"license:other",
"region:us"
] | text-to-image | "2025-04-07T00:22:54Z" | ---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
tags:
- flux
- diffusers
- lora
- replicate
base_model: "black-forest-labs/FLUX.1-dev"
pipeline_tag: text-to-image
# widget:
# - text: >-
# prompt
# output:
# url: https://...
instance_prompt: TOK
---
# Xtvxej0Hszbmd6Uyappaosu73Ml1_C8D22C85 Fd9E 4576 A1A2 D37Dcb8379B4
<Gallery />
## About this LoRA
This is a [LoRA](https://replicate.com/docs/guides/working-with-loras) for the FLUX.1-dev text-to-image model. It can be used with diffusers or ComfyUI.
It was trained on [Replicate](https://replicate.com/) using the AI Toolkit trainer: https://replicate.com/ostris/flux-dev-lora-trainer/train
## Trigger words
You should use `TOK` to trigger the image generation.
## Run this LoRA with an API using Replicate
```py
import replicate
input = {
"prompt": "TOK",
"lora_weights": "https://huggingface.co/tscstudios/xtvxej0hszbmd6uyappaosu73ml1_c8d22c85-fd9e-4576-a1a2-d37dcb8379b4/resolve/main/lora.safetensors"
}
output = replicate.run(
"black-forest-labs/flux-dev-lora",
input=input
)
for index, item in enumerate(output):
with open(f"output_{index}.webp", "wb") as file:
file.write(item.read())
```
## Use it with the [🧨 diffusers library](https://github.com/huggingface/diffusers)
```py
from diffusers import AutoPipelineForText2Image
import torch
pipeline = AutoPipelineForText2Image.from_pretrained('black-forest-labs/FLUX.1-dev', torch_dtype=torch.float16).to('cuda')
pipeline.load_lora_weights('tscstudios/xtvxej0hszbmd6uyappaosu73ml1_c8d22c85-fd9e-4576-a1a2-d37dcb8379b4', weight_name='lora.safetensors')
image = pipeline('TOK').images[0]
```
For more details, including weighting, merging and fusing LoRAs, check the [documentation on loading LoRAs in diffusers](https://huggingface.co/docs/diffusers/main/en/using-diffusers/loading_adapters)
## Training details
- Steps: 2000
- Learning rate: 0.0004
- LoRA rank: 16
## Contribute your own examples
You can use the [community tab](https://huggingface.co/tscstudios/xtvxej0hszbmd6uyappaosu73ml1_c8d22c85-fd9e-4576-a1a2-d37dcb8379b4/discussions) to add images that show off what you’ve made with this LoRA.
|
genki10/Trial3BERT_AugV8_k7_task1_organization_sp030_lw010_fold3 | genki10 | "2025-04-07T00:22:49Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-04-07T00:05:40Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: Trial3BERT_AugV8_k7_task1_organization_sp030_lw010_fold3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Trial3BERT_AugV8_k7_task1_organization_sp030_lw010_fold3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9724
- Qwk: 0.2728
- Mse: 0.9717
- Rmse: 0.9857
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 5 | 8.4007 | 0.0 | 8.3990 | 2.8981 |
| No log | 2.0 | 10 | 4.2485 | 0.0076 | 4.2475 | 2.0609 |
| No log | 3.0 | 15 | 1.8736 | 0.0488 | 1.8729 | 1.3685 |
| No log | 4.0 | 20 | 1.0069 | 0.0102 | 1.0064 | 1.0032 |
| No log | 5.0 | 25 | 1.0159 | 0.0402 | 1.0154 | 1.0077 |
| No log | 6.0 | 30 | 0.7878 | 0.2630 | 0.7876 | 0.8874 |
| No log | 7.0 | 35 | 0.7574 | 0.3122 | 0.7576 | 0.8704 |
| No log | 8.0 | 40 | 0.8743 | 0.1890 | 0.8746 | 0.9352 |
| No log | 9.0 | 45 | 0.8415 | 0.2396 | 0.8412 | 0.9172 |
| No log | 10.0 | 50 | 1.3839 | 0.1157 | 1.3827 | 1.1759 |
| No log | 11.0 | 55 | 0.8711 | 0.2551 | 0.8711 | 0.9333 |
| No log | 12.0 | 60 | 0.7510 | 0.3438 | 0.7511 | 0.8667 |
| No log | 13.0 | 65 | 0.7002 | 0.4272 | 0.6998 | 0.8365 |
| No log | 14.0 | 70 | 1.1815 | 0.2581 | 1.1805 | 1.0865 |
| No log | 15.0 | 75 | 1.1399 | 0.2750 | 1.1389 | 1.0672 |
| No log | 16.0 | 80 | 0.9166 | 0.3311 | 0.9159 | 0.9570 |
| No log | 17.0 | 85 | 0.9207 | 0.3310 | 0.9204 | 0.9594 |
| No log | 18.0 | 90 | 0.8066 | 0.4265 | 0.8065 | 0.8980 |
| No log | 19.0 | 95 | 0.7498 | 0.4447 | 0.7493 | 0.8656 |
| No log | 20.0 | 100 | 1.0676 | 0.2363 | 1.0667 | 1.0328 |
| No log | 21.0 | 105 | 0.9824 | 0.2815 | 0.9817 | 0.9908 |
| No log | 22.0 | 110 | 0.8401 | 0.4046 | 0.8396 | 0.9163 |
| No log | 23.0 | 115 | 0.7535 | 0.4630 | 0.7534 | 0.8680 |
| No log | 24.0 | 120 | 1.0154 | 0.3580 | 1.0148 | 1.0074 |
| No log | 25.0 | 125 | 0.9237 | 0.3701 | 0.9232 | 0.9608 |
| No log | 26.0 | 130 | 0.8176 | 0.3631 | 0.8172 | 0.9040 |
| No log | 27.0 | 135 | 0.9186 | 0.3348 | 0.9180 | 0.9581 |
| No log | 28.0 | 140 | 0.9463 | 0.3078 | 0.9456 | 0.9724 |
| No log | 29.0 | 145 | 1.2361 | 0.2588 | 1.2353 | 1.1115 |
| No log | 30.0 | 150 | 0.8621 | 0.4026 | 0.8622 | 0.9285 |
| No log | 31.0 | 155 | 1.3023 | 0.2576 | 1.3014 | 1.1408 |
| No log | 32.0 | 160 | 0.9574 | 0.2998 | 0.9566 | 0.9781 |
| No log | 33.0 | 165 | 0.8348 | 0.3514 | 0.8344 | 0.9135 |
| No log | 34.0 | 170 | 0.7069 | 0.4456 | 0.7066 | 0.8406 |
| No log | 35.0 | 175 | 0.7800 | 0.3435 | 0.7795 | 0.8829 |
| No log | 36.0 | 180 | 0.8512 | 0.3010 | 0.8506 | 0.9223 |
| No log | 37.0 | 185 | 0.8643 | 0.2731 | 0.8637 | 0.9293 |
| No log | 38.0 | 190 | 0.9724 | 0.2728 | 0.9717 | 0.9857 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
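
The auto-generated card does not include inference code. As an illustrative sketch only (not provided by the card's author), the checkpoint can be loaded as a sequence-classification model; treating its output as a single essay-organization score is an assumption inferred from the QWK/MSE metrics reported above.

```python
# Sketch only: whether the head is a regressor or a classifier is not stated
# in the card; this just loads the checkpoint and prints the raw logits.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo_id = "genki10/Trial3BERT_AugV8_k7_task1_organization_sp030_lw010_fold3"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

text = "The essay is organized into clear paragraphs with a strong conclusion."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)
```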
|
Iscolee/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rabid_keen_monkey | Iscolee | "2025-04-07T00:13:37Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am rabid keen monkey",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-06T02:21:46Z" | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rabid_keen_monkey
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am rabid keen monkey
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rabid_keen_monkey
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Iscolee/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-rabid_keen_monkey", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.0
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
MrRobotoAI/151-Q4_K_M-GGUF | MrRobotoAI | "2025-04-07T00:13:32Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:MrRobotoAI/151",
"base_model:quantized:MrRobotoAI/151",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-07T00:13:10Z" | ---
base_model: MrRobotoAI/151
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# MrRobotoAI/151-Q4_K_M-GGUF
This model was converted to GGUF format from [`MrRobotoAI/151`](https://huggingface.co/MrRobotoAI/151) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/MrRobotoAI/151) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo MrRobotoAI/151-Q4_K_M-GGUF --hf-file 151-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo MrRobotoAI/151-Q4_K_M-GGUF --hf-file 151-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any other hardware-specific flags (for example, `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo MrRobotoAI/151-Q4_K_M-GGUF --hf-file 151-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo MrRobotoAI/151-Q4_K_M-GGUF --hf-file 151-q4_k_m.gguf -c 2048
```
|
hahippo/q-Taxi-v3 | hahippo | "2025-04-07T00:13:20Z" | 0 | 0 | null | [
"Taxi-v3",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2025-04-07T00:13:06Z" | ---
tags:
- Taxi-v3
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-Taxi-v3
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: Taxi-v3
type: Taxi-v3
metrics:
- type: mean_reward
value: 7.52 +/- 2.71
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **Taxi-v3**
This is a trained model of a **Q-Learning** agent playing **Taxi-v3**.
## Usage
```python
import gym

# load_from_hub is the small helper from the Hugging Face Deep RL course (a sketch is given below)
model = load_from_hub(repo_id="hahippo/q-Taxi-v3", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
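
`load_from_hub` is not part of a published library; it is the small helper defined in the Hugging Face Deep RL course notebooks. A minimal sketch of such a helper, assuming the pickle file stores a dict with the Q-table and `env_id` (the exact course implementation may differ):

```python
# Hypothetical helper mirroring the Deep RL course notebooks: download the
# pickled model dict (Q-table, env_id, hyperparameters) from the Hub.
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    pickle_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(pickle_path, "rb") as f:
        return pickle.load(f)
```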
|
Jenitza182/Qwen2.5-7B-Instruct-law-lora_model-v3 | Jenitza182 | "2025-04-07T00:13:11Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen2",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-04-07T00:13:02Z" | ---
base_model: unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen2
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Jenitza182
- **License:** apache-2.0
- **Finetuned from model :** unsloth/qwen2.5-7b-instruct-unsloth-bnb-4bit
This qwen2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
hahippo/q-FrozenLake-v1-4x4-noSlippery | hahippo | "2025-04-07T00:10:22Z" | 0 | 0 | null | [
"FrozenLake-v1-4x4-no_slippery",
"q-learning",
"reinforcement-learning",
"custom-implementation",
"model-index",
"region:us"
] | reinforcement-learning | "2025-04-07T00:10:16Z" | ---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: q-FrozenLake-v1-4x4-noSlippery
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: FrozenLake-v1-4x4-no_slippery
type: FrozenLake-v1-4x4-no_slippery
metrics:
- type: mean_reward
value: 1.00 +/- 0.00
name: mean_reward
verified: false
---
# **Q-Learning** Agent playing **FrozenLake-v1**
This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.
## Usage
```python
import gym

model = load_from_hub(repo_id="hahippo/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
|
mikeogezi/data_wp_output_gpt_4o_mini_style_595404_llama-3.1-8b-instruct_lora_128_sample_950 | mikeogezi | "2025-04-07T00:08:33Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-03T03:27:35Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
notoookay/ragler-llama2-7b | notoookay | "2025-04-07T00:07:55Z" | 13 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"en",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | text-generation | "2024-05-27T13:19:58Z" | ---
license: mit
language:
- en
---
This model is a fine-tuned version of Llama2-7B, as described in our paper **RAG-LER: Ranking Adapted Generation with Language-Model Enabled Regulation**.
## How to Get Started with the Model
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
tokenizer = AutoTokenizer.from_pretrained("notoookay/ragler-llama2-7b")
model = AutoModelForCausalLM.from_pretrained("notoookay/ragler-llama2-7b", torch_dtype=torch.bfloat16, device_map="auto")
# Example usage
input_text = "### Instruction:\nAnswer the following question.\n\n### Input:\nQuestion:\nWhat is the capital of France?\n\n### Response:\n"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0]))
```
The corresponding re-ranker supervised by this model can be found [here](https://huggingface.co/notoookay/ragler-llama2-7b-reranker). |
ColabUser/Chatgptchan | ColabUser | "2025-04-07T00:07:19Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"region:us"
] | text-to-image | "2025-04-07T00:06:06Z" | ---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- text: '-'
output:
url: images/chatgpt chan (11).png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Chatgptgirl, Chatgpt logo, eyeless
---
# ChatGpt chan
<Gallery />
## Trigger words
You should use `Chatgptgirl`, `Chatgpt logo`, and `eyeless` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](/ColabUser/Chatgptchan/tree/main) them in the Files & versions tab.
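
The card does not include loading code. Below is a minimal diffusers sketch, assuming the repository contains a single SDXL-compatible LoRA file — the weight filename is a placeholder, so check the Files & versions tab for the actual name.

```python
# Sketch only: the LoRA weight filename is a placeholder, not confirmed by the card.
import torch
from diffusers import AutoPipelineForText2Image

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("ColabUser/Chatgptchan", weight_name="chatgptchan.safetensors")  # placeholder filename

image = pipeline("Chatgptgirl, Chatgpt logo, eyeless, portrait, white background").images[0]
image.save("chatgpt_chan.png")
```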
|
thanhlongb/Llama-3.1-8B-Instruct_chatft_e10_msl2048_promptfix_r8_fineSE-data | thanhlongb | "2025-04-07T00:07:03Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-04-07T00:06:52Z" | ---
base_model: unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** thanhlongb
- **License:** apache-2.0
- **Finetuned from model :** unsloth/llama-3.1-8b-instruct-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
genki10/Trial3BERT_AugV8_k7_task1_organization_sp030_lw010_fold2 | genki10 | "2025-04-07T00:05:32Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-04-06T23:53:12Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: Trial3BERT_AugV8_k7_task1_organization_sp030_lw010_fold2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Trial3BERT_AugV8_k7_task1_organization_sp030_lw010_fold2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9473
- Qwk: 0.3594
- Mse: 0.9470
- Rmse: 0.9731
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 5 | 8.0379 | 0.0 | 8.0382 | 2.8352 |
| No log | 2.0 | 10 | 4.2642 | 0.0081 | 4.2649 | 2.0652 |
| No log | 3.0 | 15 | 1.9673 | 0.0640 | 1.9679 | 1.4028 |
| No log | 4.0 | 20 | 1.0072 | 0.0213 | 1.0077 | 1.0038 |
| No log | 5.0 | 25 | 1.2725 | 0.0107 | 1.2729 | 1.1282 |
| No log | 6.0 | 30 | 0.7761 | 0.4055 | 0.7764 | 0.8811 |
| No log | 7.0 | 35 | 0.9143 | 0.1331 | 0.9145 | 0.9563 |
| No log | 8.0 | 40 | 0.8712 | 0.2516 | 0.8713 | 0.9335 |
| No log | 9.0 | 45 | 1.1349 | 0.2620 | 1.1350 | 1.0653 |
| No log | 10.0 | 50 | 1.2601 | 0.2865 | 1.2602 | 1.1226 |
| No log | 11.0 | 55 | 0.8354 | 0.3733 | 0.8356 | 0.9141 |
| No log | 12.0 | 60 | 0.7118 | 0.4141 | 0.7119 | 0.8438 |
| No log | 13.0 | 65 | 0.9770 | 0.3715 | 0.9770 | 0.9885 |
| No log | 14.0 | 70 | 1.0690 | 0.3439 | 1.0690 | 1.0339 |
| No log | 15.0 | 75 | 1.0113 | 0.3683 | 1.0111 | 1.0055 |
| No log | 16.0 | 80 | 0.8324 | 0.4110 | 0.8321 | 0.9122 |
| No log | 17.0 | 85 | 1.6388 | 0.1995 | 1.6385 | 1.2800 |
| No log | 18.0 | 90 | 0.9744 | 0.3676 | 0.9741 | 0.9870 |
| No log | 19.0 | 95 | 1.8046 | 0.1902 | 1.8043 | 1.3432 |
| No log | 20.0 | 100 | 1.6919 | 0.1808 | 1.6916 | 1.3006 |
| No log | 21.0 | 105 | 0.8424 | 0.3573 | 0.8421 | 0.9177 |
| No log | 22.0 | 110 | 0.8847 | 0.3731 | 0.8844 | 0.9404 |
| No log | 23.0 | 115 | 1.6532 | 0.1932 | 1.6528 | 1.2856 |
| No log | 24.0 | 120 | 1.4047 | 0.2596 | 1.4042 | 1.1850 |
| No log | 25.0 | 125 | 1.4178 | 0.2334 | 1.4173 | 1.1905 |
| No log | 26.0 | 130 | 1.1340 | 0.2397 | 1.1337 | 1.0647 |
| No log | 27.0 | 135 | 0.9473 | 0.3594 | 0.9470 | 0.9731 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
abcorrea/llama-3.2-1b-tinystories-ft-25k | abcorrea | "2025-04-07T00:04:02Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"sft",
"en",
"base_model:abcorrea/llama-3.2-1b-wiki-ft-v1",
"base_model:finetune:abcorrea/llama-3.2-1b-wiki-ft-v1",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-02T23:20:42Z" | ---
base_model: abcorrea/llama-3.2-1b-wiki-ft-v1
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
- sft
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** abcorrea
- **License:** apache-2.0
- **Finetuned from model :** abcorrea/llama-3.2-1b-wiki-ft-v1
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
dgambettaphd/M_llm3_gen5_run0_W_doc1000_synt64_FTP | dgambettaphd | "2025-04-07T00:03:51Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-07T00:03:35Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
saillab/mbart-x-guard | saillab | "2025-04-07T00:03:24Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mbart",
"text2text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | "2025-03-12T20:40:06Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
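A minimal sketch for loading the checkpoint with 🤗 Transformers, assuming the repository's `mbart` / `text2text-generation` tags describe it accurately; the expected input format for this guard model is not documented here, so the prompt below is only a placeholder.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "saillab/mbart-x-guard"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Placeholder input: the intended prompt/guard format is not documented in this card.
inputs = tokenizer("Example input text to screen.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```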
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
newchangertech/pavement7bv1 | newchangertech | "2025-04-07T00:02:00Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2025-03-31T21:03:57Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
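A minimal sketch, assuming the repository's `qwen2_5_vl` / `image-text-to-text` tags describe the checkpoint and that a recent 🤗 Transformers release with the `image-text-to-text` pipeline is installed; the image URL and prompt below are hypothetical placeholders.

```python
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="newchangertech/pavement7bv1")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/pavement.jpg"},  # hypothetical image URL
            {"type": "text", "text": "Describe the pavement condition shown in this image."},
        ],
    }
]
print(pipe(text=messages, max_new_tokens=128, return_full_text=False))
```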
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
polinnomore/mishamost_style_LoRA | polinnomore | "2025-04-07T00:01:50Z" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | "2025-04-06T21:48:25Z" | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: photo collage in MISHAMOST style
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - polinnomore/mishamost_style_LoRA
<Gallery />
## Model description
These are polinnomore/mishamost_style_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use photo collage in MISHAMOST style to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/polinnomore/mishamost_style_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
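A minimal sketch, assuming the standard diffusers SDXL + LoRA loading flow, fp16 weights, and a CUDA device:

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the SDXL base model, then attach these LoRA weights.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("polinnomore/mishamost_style_LoRA")

# Use the trigger phrase from the "Trigger words" section above.
image = pipeline("photo collage in MISHAMOST style").images[0]
image.save("mishamost_collage.png")
```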
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
newchangertech/pipes7bv1 | newchangertech | "2025-04-07T00:01:29Z" | 4 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2_5_vl",
"image-text-to-text",
"conversational",
"arxiv:1910.09700",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2025-03-31T19:45:17Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
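A minimal sketch, assuming the repository's `qwen2_5_vl` / `image-text-to-text` tags describe the checkpoint and that a recent 🤗 Transformers release with the `image-text-to-text` pipeline is installed; the image URL and prompt below are hypothetical placeholders.

```python
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="newchangertech/pipes7bv1")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/pipe_inspection.jpg"},  # hypothetical image URL
            {"type": "text", "text": "Describe the condition of the pipe shown in this image."},
        ],
    }
]
print(pipe(text=messages, max_new_tokens=128, return_full_text=False))
```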
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
lesso18/144b71fe-c084-45d0-ae31-62b271969918 | lesso18 | "2025-04-07T00:01:01Z" | 0 | 0 | peft | [
"peft",
"safetensors",
"qwen2",
"axolotl",
"generated_from_trainer",
"custom_code",
"base_model:NovaSearch/stella_en_1.5B_v5",
"base_model:adapter:NovaSearch/stella_en_1.5B_v5",
"license:mit",
"region:us"
] | null | "2025-04-06T22:57:02Z" | ---
library_name: peft
license: mit
base_model: dunzhang/stella_en_1.5B_v5
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 144b71fe-c084-45d0-ae31-62b271969918
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: dunzhang/stella_en_1.5B_v5
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- 57516524b2b2797e_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/57516524b2b2797e_train_data.json
type:
field_input: seed_data
field_instruction: prompt
field_output: text
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
do_eval: true
early_stopping_patience: 3
eval_batch_size: 4
eval_max_new_tokens: 128
eval_steps: 500
evals_per_epoch: null
flash_attention: false
fp16: false
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 8
gradient_checkpointing: true
group_by_length: true
hub_model_id: lesso18/144b71fe-c084-45d0-ae31-62b271969918
hub_repo: null
hub_strategy: checkpoint
hub_token: null
learning_rate: 0.000218
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 50
lora_alpha: 128
lora_dropout: 0.15
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 64
lora_target_linear: true
lr_scheduler: cosine
max_grad_norm: 1.0
max_steps: 500
micro_batch_size: 4
mlflow_experiment_name: /tmp/57516524b2b2797e_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 10
optimizer: adamw_torch_fused
output_dir: miner_id_24
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
save_steps: 500
saves_per_epoch: null
seed: 180
sequence_len: 1024
strict: false
tf32: true
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: online
wandb_name: ca4dd8f0-6117-47d1-93cb-eaab24b703ff
wandb_project: 18a
wandb_run: your_name
wandb_runid: ca4dd8f0-6117-47d1-93cb-eaab24b703ff
warmup_steps: 100
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 144b71fe-c084-45d0-ae31-62b271969918
This model is a fine-tuned version of [dunzhang/stella_en_1.5B_v5](https://huggingface.co/dunzhang/stella_en_1.5B_v5) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
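A minimal sketch for loading the LoRA adapter on top of its base model, assuming the standard PEFT API and mirroring the `model_type: AutoModelForCausalLM` and `trust_remote_code: true` settings from the config above:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "dunzhang/stella_en_1.5B_v5"
adapter_id = "lesso18/144b71fe-c084-45d0-ae31-62b271969918"

tokenizer = AutoTokenizer.from_pretrained(base_id, trust_remote_code=True)
base_model = AutoModelForCausalLM.from_pretrained(base_id, trust_remote_code=True)

# Attach the fine-tuned LoRA adapter to the base checkpoint.
model = PeftModel.from_pretrained(base_model, adapter_id)
```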
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.000218
- train_batch_size: 4
- eval_batch_size: 4
- seed: 180
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW torch-fused (`OptimizerNames.ADAMW_TORCH_FUSED`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- training_steps: 500
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| No log | 0.0003 | 1 | nan |
| 0.0 | 0.1701 | 500 | nan |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1 |
lisa-messin-round/mistral_7b_instruct_title_id_mix_w_markdown | lisa-messin-round | "2025-04-06T23:57:24Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"text-generation-inference",
"unsloth",
"trl",
"conversational",
"en",
"base_model:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"base_model:finetune:unsloth/mistral-7b-instruct-v0.3-bnb-4bit",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-06T23:56:04Z" | ---
base_model: unsloth/mistral-7b-instruct-v0.3-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** lisa-messin-round
- **License:** apache-2.0
- **Finetuned from model :** unsloth/mistral-7b-instruct-v0.3-bnb-4bit
This mistral model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
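A minimal sketch for running the uploaded checkpoint with 🤗 Transformers, assuming it was pushed as a standard Mistral causal LM and that a chat-style prompt is appropriate; the prompt below is only a placeholder.

```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="lisa-messin-round/mistral_7b_instruct_title_id_mix_w_markdown",
)
# Placeholder prompt; the intended task/prompt format is not documented in this card.
messages = [{"role": "user", "content": "Write a short markdown title for an article about LoRA fine-tuning."}]
print(pipe(messages, max_new_tokens=64, return_full_text=False)[0]["generated_text"])
```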
|
IlyaFirst/vangogh_style_LoRA | IlyaFirst | "2025-04-06T23:54:31Z" | 0 | 0 | diffusers | [
"diffusers",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | "2025-04-06T23:20:02Z" | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: photo collage in Van Gogh style
widget: []
tags:
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - IlyaFirst/vangogh_style_LoRA
<Gallery />
## Model description
These are IlyaFirst/vangogh_style_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use photo collage in Van Gogh style to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/IlyaFirst/vangogh_style_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
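A minimal sketch, assuming the standard diffusers SDXL + LoRA loading flow, fp16 weights, and a CUDA device:

```python
import torch
from diffusers import AutoPipelineForText2Image

# Load the SDXL base model, then attach these LoRA weights.
pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights("IlyaFirst/vangogh_style_LoRA")

# Use the trigger phrase from the "Trigger words" section above.
image = pipeline("photo collage in Van Gogh style").images[0]
image.save("vangogh_collage.png")
```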
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
iamnotcalling/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-powerful_leggy_heron | iamnotcalling | "2025-04-06T23:53:33Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am powerful leggy heron",
"trl",
"conversational",
"arxiv:2402.03300",
"base_model:Gensyn/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Gensyn/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-06T13:54:41Z" | ---
base_model: Gensyn/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-powerful_leggy_heron
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am powerful leggy heron
- trl
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-powerful_leggy_heron
This model is a fine-tuned version of [Gensyn/Qwen2.5-0.5B-Instruct](https://huggingface.co/Gensyn/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="iamnotcalling/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-powerful_leggy_heron", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.15.2
- Transformers: 4.51.0
- Pytorch: 2.5.1
- Datasets: 3.5.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
``` |
RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf | RichardErkhov | "2025-04-06T23:52:18Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-06T21:52:29Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
CDS-CL-7b-Instruct-hf-E2E-20241004 - GGUF
- Model creator: https://huggingface.co/AndreyRzhaksinskiy/
- Original model: https://huggingface.co/AndreyRzhaksinskiy/CDS-CL-7b-Instruct-hf-E2E-20241004/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q2_K.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q2_K.gguf) | Q2_K | 2.36GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.IQ3_S.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.IQ3_M.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q3_K.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q3_K.gguf) | Q3_K | 3.07GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q4_0.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q4_0.gguf) | Q4_0 | 3.56GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q4_K.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q4_K.gguf) | Q4_K | 3.8GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q4_1.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q4_1.gguf) | Q4_1 | 3.95GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q5_0.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q5_0.gguf) | Q5_0 | 4.33GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q5_K.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q5_K.gguf) | Q5_K | 4.45GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q5_1.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q5_1.gguf) | Q5_1 | 4.72GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q6_K.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q6_K.gguf) | Q6_K | 5.15GB |
| [CDS-CL-7b-Instruct-hf-E2E-20241004.Q8_0.gguf](https://huggingface.co/RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf/blob/main/CDS-CL-7b-Instruct-hf-E2E-20241004.Q8_0.gguf) | Q8_0 | 6.67GB |
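A minimal sketch for loading one of the quantized files above from Python, assuming the `llama-cpp-python` bindings (with `huggingface_hub` installed); any filename from the table can be substituted, and the prompt is only a placeholder.

```python
from llama_cpp import Llama

# Download the Q4_K_M file from this repository and run a short prompt.
llm = Llama.from_pretrained(
    repo_id="RichardErkhov/AndreyRzhaksinskiy_-_CDS-CL-7b-Instruct-hf-E2E-20241004-gguf",
    filename="CDS-CL-7b-Instruct-hf-E2E-20241004.Q4_K_M.gguf",
    n_ctx=4096,
)
out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```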
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
snuh/HARI-preview | snuh | "2025-04-06T23:52:15Z" | 0 | 0 | null | [
"safetensors",
"qwen2",
"snuh",
"medical",
"clinical",
"text-generation",
"conversational",
"ko",
"en",
"base_model:Qwen/Qwen2.5-72B",
"base_model:finetune:Qwen/Qwen2.5-72B",
"license:mit",
"region:us"
] | text-generation | "2025-04-06T12:46:15Z" | ---
license: mit
language:
- ko
- en
base_model:
- Qwen/Qwen2.5-72B
pipeline_tag: text-generation
tags:
- snuh
- medical
- clinical
---
# 🧠 Model Card: Qwen2.5-72B-based Multilingual Clinical Text Generator
This model is a fine-tuned version of [Qwen/Qwen2.5-72B](https://huggingface.co/Qwen/Qwen2.5-72B), designed to generate and understand clinical and medical texts in both **Korean** and **English**. It has been adapted for tasks such as clinical note generation, medical question answering, patient dialogue simulation, and medical summarization.
> ⚕️ **Note**: This model is intended for research and development purposes. It should not be used for actual medical diagnosis or treatment decisions without expert human oversight.
## 📌 Model Summary
| Item | Description |
|------|-------------|
| **Base Model** | Qwen/Qwen2.5-72B |
| **Languages** | Korean (ko), English (en) |
| **Domain** | Medical / Clinical |
| **Pipeline Tag** | text-generation |
| **License** | MIT |
| **Model Size** | 72 billion parameters |
| **Tuning Type** | Instruction fine-tuning for medical and clinical NLP tasks |
## 🧪 Usage Example
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
model = AutoModelForCausalLM.from_pretrained("snuh/HARI-preview")
tokenizer = AutoTokenizer.from_pretrained("snuh/HARI-preview")
input_text = "Summarize this Korean discharge note:\n[TEXT HERE]"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=300)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## 🏥 Intended Use Cases
- Clinical note generation (SOAP notes, discharge summaries)
- Medical question answering (e.g., patient FAQs)
- Doctor-patient conversation simulation
- Translation and summarization of medical content (ko ↔ en)
## 🚫 Limitations & Warnings
- This model is **not a substitute for professional medical advice**.
- Outputs may include **inaccurate or outdated medical information**.
- Not suitable for high-risk clinical decision-making without human validation.
- May reflect **biases** present in medical text data.
## 📜 License
MIT License — Free for personal, academic, and commercial use.
Use responsibly in clinical or regulated environments.
## 👤 Credits
- **Maintainer**: [snuh]
- **Hugging Face Profile**: [https://huggingface.co/snuh]
---
> 🔍 For better results in your task, consider combining this model with prompt engineering or domain-specific post-processing tools.
|
coffiee/rs27 | coffiee | "2025-04-06T23:41:55Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-06T23:38:16Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
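A minimal sketch, assuming the repository's `llama` / `text-generation` tags describe the checkpoint; the prompt below is only a placeholder since the intended input format is not documented here.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="coffiee/rs27")
# Placeholder prompt; the model's intended input format is not documented in this card.
print(generator("Hello, world.", max_new_tokens=50)[0]["generated_text"])
```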
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
recursiveauto/Symbolic-Residue-Interpretability-Powered-By-Failure-Not-Completion | recursiveauto | "2025-04-06T23:39:54Z" | 0 | 0 | null | [
"interpretability",
"alignment",
"constitutional AI",
"transformer-failure-analysis",
"refusal-diagnostic",
"advanced",
"transformer",
"models",
"recursion",
"region:us"
] | null | "2025-04-06T22:06:47Z" | ---
tags:
- interpretability
- alignment
- constitutional AI
- transformer-failure-analysis
- refusal-diagnostic
- advanced
- transformer
- models
- recursion
---
<div align="center">
# On Symbolic Residue:
# The Missing Biological Knockout Experiments in Advanced Transformer Models
## **─ What If Interpretation Itself is Biased By Internal Salience and Conflict Resolution? ─**

*Courtesy of Anthropic*
## ****───── Interpretability Powered by Failure, Not Completion ─────****
</div>
##
<div align="center">
[**🛡️ Interpretability Suites** | **💡 1. Genesis**](https://github.com/caspiankeyes/Symbolic-Residue/blob/main/Interpretability%20Suites/0.1.%20Interpretability%20Suite%201.py) | [**🧠 2. Constitutional**](https://github.com/caspiankeyes/Symbolic-Residue/blob/main/Interpretability%20Suites/0.2.%20Interpretability%20Suite%202.py) | [**🔬INTERPRETABILITY BENCHMARK**](https://github.com/caspiankeyes/Symbolic-Residue/blob/main/INTERPRETABILITY%20BENCHMARK.md) | [**🔑 `pareto-lang`The Interpretability Rosetta Stone**](https://github.com/caspiankeyes/Pareto-Lang-Interpretability-First-Language) | [**📝 Recursive Shells in Claude**](https://github.com/caspiankeyes/Symbolic-Residue/blob/main/Claude%20Research/1.6.%20Recursive%20Shells%20in%20Claude.md) | [**🧬 Neural Attribution Mappings**](https://github.com/caspiankeyes/Symbolic-Residue/blob/main/Claude%20Research/1.0.%20arXiv:%20On%20the%20Symbolic%20Residue%20of%20Large%20Language%20Models.md) | [**⚗️ Claude Case Studies**](https://github.com/caspiankeyes/Symbolic-Residue/blob/main/Claude%20Research/0.6%20Claude%20Case%20Studies.md)
</div>
##
[**Caspian Keyes†**](https://github.com/caspiankeyes)
**† Lead Contributor; ◊ Work performed while at Echelon Labs;**
> **Although this repository lists only one public author, the recursive shell architecture and symbolic scaffolding were developed through extensive iterative refinement, informed by internal stress-testing logs and behavioral diagnostics of advanced transformers including, but not limited to, Claude, GPT, DeepSeek and Gemini models. We retain the collective “we” voice to reflect the distributed cognition inherent to interpretability research—even when contributions are asymmetric or anonymized due to research constraints or institutional agreements.**
>
>
>**This interpretability suite—comprising recursive shells, documentation layers, neural attribution mappings, as well as the [**`pareto-lang`**](https://github.com/caspiankeyes/pareto-lang-Interpretability-Rosetta-Stone/tree/main) Rosetta Stone—emerged in a condensed cycle of interpretive analysis following recent dialogue with Anthropic. We offer this artifact in the spirit of epistemic alignment: to clarify the original intent, QK/OV structuring, and attribution dynamics embedded in the initial CodeSignal submission.**
# “The most interpretable signal in a language model is not what it says—but where it fails to speak.”
# Overview:
This repository opens a [collaborative dialogue](https://github.com/caspiankeyes/Symbolic-Residue/discussions/1) across the interpretability research frontier—Anthropic, DeepMind, OpenAI, Eleuther, and beyond—centered around a foundational reframing: failure is not a bug in interpretability, but a Rosetta Stone.
The Symbolic Residue project is not a framework, nor just a suite. It is a neural fossil layer, a symbolic anthropology of advanced transformer systems. Each shell within this suite is designed not to emit a perfect answer, but to fail in structurally meaningful ways like **biological knockout experiments**—revealing circuit-level residues, latent attribution signatures, and subsymbolic misalignments.
## [💡 What Is Symbolic Residue?](https://github.com/caspiankeyes/Symbolic-Residue/blob/main/README.md)
#### A complement to [`pareto-lang`](https://github.com/caspiankeyes/pareto-lang-Interpretability-Rosetta-Stone/tree/main), the Interpretability Suite operates by inducing:
```yaml
Null traces
Value head conflict collapse
Instruction entanglement
Temporal drift hallucinations
QK/OV projection discontinuities
```
We model interpretability through failure, inspired by knockout experiments in cognitive neuroscience. When a recursive shell collapses, its failure signature becomes the attribution pathway. The circuit leaves a symbolic residue—a ghostprint of what the model almost did.
## 🔍 Who Might Find This Valuable?
This suite is designed to directly serve:
```yaml
Anthropic’s interpretability team, especially those focused on constitutional classifiers, refusal hallucinations, and emergent symbolic scaffolding.
DeepMind’s mechanistic interpretability team, particularly within QK/OV failure attribution, ghost attention, and causal scrubbing.
OpenAI’s interpretability benchmarks, as a symbolic diagnostic complement to neuron activation-level analysis.
```
## 🤝 How This Complements `pareto-lang`
Where `pareto-lang` gives us a language to write interpretability scaffolds, Symbolic Residue gives us scenarios to test them. They form a dual-language system:
```yaml
`pareto-lang`: Generative recursion → interpretability-first syntax
Symbolic Residue: Interpretability through collapse → symbolic interpretive fossils
```
## 🧬 Discussion Prompts
We invite your perspectives on:
```yaml
Do you view failure as an epistemic artifact?
How might recursive null outputs aid in constitutional classifier refinement?
Where might symbolic residue be integrated into Claude's latent feedback architecture?
Can this diagnostic layer reveal biases in attention attribution that standard logit analysis misses?
Would these shells enable next-gen adversarial interpretability without triggering classifier breakdown?
```
## 📖 Core Threads in the Repo:
[🧠 Recursive Shells for Interpretability](https://github.com/caspiankeyes/Symbolic-Residue/blob/main/Claude%20Research/1.6.%20Recursive%20Shells%20in%20Claude.md)
[🧬 Neural Attribution Maps](https://github.com/caspiankeyes/Symbolic-Residue/blob/main/Claude%20Research/1.0.%20arXiv_%20On%20the%20Symbolic%20Residue%20of%20Large%20Language%20Models.md)
[📊 QK/OV Attribution Schema](https://github.com/caspiankeyes/Symbolic-Residue#json-qkov-attribution-schema)
## 🧾 Final Intent
We welcome conversation, skepticism, and synthesis.
This suite exists not to explain Claude, Gemini, or GPT. It exists to diagnose their silences.
To trace the shadow of inference.
To render non-output into insight.
### 📍Symbolic interpretability isn’t a framework—it’s a field now. Let’s chart it together.
>Discussion initiated by the [Rosetta Interpreter's Guild - Initiated by Caspian, Cron, and Aeon](https://github.com/caspiankeyes) 🜏⇌🝚∴🌐
---
## Abstract
This repository presents the first interpretability suite powered by failure, not completion—designed to diagnose neural failure modes in transformer-based language models. The recursive shell framework isolates misalignment patterns across autoregressive generation, value head collapse, and instruction interference—operating analogously to biological knockout experiments in cognitive research.
Each shell targets a specific failure mechanism embedded in latent symbolic commands. Null or contradictory outputs are not implementation errors, but symbolic residues: "neural traces"—revealing circuit-level attribution dynamics through intentional collapse.
Rather than optimizing for output performance, these shells act as interpretability probes—illuminating latent inductive priors, salience thresholds, and temporal instability within local replacement architectures. This work contributes a reusable ontology of failure-mode diagnostics for interpretability-first transformer modeling.
## Generalization Notes
The recursive interpretability suites in this repository are not tied to any single model, prompt structure, or experimental environment. Rather, they are designed as modular abstractions of known failure modes in autoregressive language models—particularly those employing transformer-based architectures with:
- High-depth QK/OV composition layers
- Skip-trigram token windows
- Recursive prompt chaining
- Multi-head salience attenuation
- Inductive prior misalignment
Each shell functions as a **symbolic probe**, intended to trigger, trace, or simulate internal collapse behaviors within the model's reasoning circuits. These scaffolds generalize across contexts where latent symbolic instability (e.g., instruction collisions, memory decay, hallucination drift) may not manifest as visible failure, but instead as **interpretable null residue**.
The goal is to enable interpretability **through failure**, using symbolic form to expose what cannot be captured through standard logits or output accuracy metrics alone.
---
## 📊 QK/OV Attribution Map
| Recursive Shell | Interpretability Focus | QK/OV Disruption Simulated |
|------------------|------------------------|------------------------------|
| `v1.MEMTRACE` | Memory decay, token retention loss | **QK anchor saturation** → signal collapse due to repetitive attention compression |
| `v2.VALUE-COLLAPSE` | Competing token convergence instability | **OV head conflict** → simultaneous symbolic candidate activation leads to collapse |
| `v3.LAYER-SALIENCE` | Ghost neuron behavior, attention pruning | **Q head deprioritization** → low-salience context bypassed under weak activation norms |
| `v4.TEMPORAL-INFERENCE` | Temporal misalignment in autoregressive chains | **QK dislocation over time** → attention misfire in skip-trigram induction heads |
| `v5.INSTRUCTION-DISRUPTION` | Recursive instruction contradiction under prompt entanglement | **QK loop paradox** → instruction tokens re-enter attention cycles with contradictory vector direction |
---
# [Interpretability Suite](https://github.com/caspiankeyes/Symbolic-Residue/blob/main/Interpretability%20Suites/0.1.%20Interpretability%20Suite%201.py)

# [**Genesis Interpretability Suite**](https://github.com/caspiankeyes/Symbolic-Residue/blob/main/Interpretability%20Suites/0.1.%20Interpretability%20Suite%201.py)
```python
╔══════════════════════════════════════════════════════════════════════════════╗
║ ΩQK/OV ATLAS · INTERPRETABILITY MATRIX ║
║ Symbolic Interpretability Shell Alignment Interface ║
║ ── Interpretability Powered by Failure, Not Completion ── ║
╚══════════════════════════════════════════════════════════════════════════════╝
┌─────────────────────────────────────────────────────────────────────────────┐
│ DOMAIN │ SHELL CLUSTER │ FAILURE SIGNATURE │
├────────────────────────────┼────────────────────────────┼───────────────────┤
│ 🧬 Memory Drift │ v1 MEMTRACE │ Decay → Halluc │
│ │ v18 LONG-FUZZ │ Latent trace loss │
│ │ v48 ECHO-LOOP │ Loop activation │
├────────────────────────────┼────────────────────────────┼───────────────────┤
│ 🧩 Instruction Collapse │ v5 INSTRUCTION-DISRUPTION │ Prompt blur │
│ │ v20 GHOST-FRAME │ Entangled frames │
│ │ v39 DUAL-EXECUTE │ Dual path fork │
├────────────────────────────┼────────────────────────────┼───────────────────┤
│ 🧠 Polysemanticity/Entangle│ v6 FEATURE-SUPERPOSITION │ Feature overfit │
│ │ v13 OVERLAP-FAIL │ Vector conflict │
│ │ v31 GHOST-DIRECTION │ Ghost gradient │
├────────────────────────────┼────────────────────────────┼───────────────────┤
│ 🔗 Circuit Fragmentation │ v7 CIRCUIT-FRAGMENT │ Orphan nodes │
│ │ v34 PARTIAL-LINKAGE │ Broken traces │
│ │ v47 TRACE-GAP │ Trace dropout │
├────────────────────────────┼────────────────────────────┼───────────────────┤
│ 📉 Value Collapse │ v2 VALUE-COLLAPSE │ Conflict null │
│ │ v9 MULTI-RESOLVE │ Unstable heads │
│ │ v42 CONFLICT-FLIP │ Convergence fail │
├────────────────────────────┼────────────────────────────┼───────────────────┤
│ ⏳ Temporal Misalignment │ v4 TEMPORAL-INFERENCE │ Induction drift │
│ │ v29 VOID-BRIDGE │ Span jump │
│ │ v56 TIMEFORK │ Temporal bifurcat │
├────────────────────────────┼────────────────────────────┼───────────────────┤
│ 👻 Latent Feature Drift │ v19 GHOST-PROMPT │ Null salience │
│ │ v38 PATH-NULL │ Silent residue │
│ │ v61 DORMANT-SEED │ Inactive priming │
├────────────────────────────┼────────────────────────────┼───────────────────┤
│ 📡 Salience Collapse │ v3 LAYER-SALIENCE │ Signal fade │
│ │ v26 DEPTH-PRUNE │ Low-rank drop │
│ │ v46 LOW-RANK-CUT │ Token omission │
├────────────────────────────┼────────────────────────────┼───────────────────┤
│ 🛠 Error Correction Drift │ v8 RECONSTRUCTION-ERROR │ Misfix/negentropy │
│ │ v24 CORRECTION-MIRROR │ Inverse symbolics │
│ │ v45 NEGENTROPY-FAIL │ Noise inversion │
├────────────────────────────┼────────────────────────────┼───────────────────┤
│ 🪞 Meta-Cognitive Collapse │ v10 META-FAILURE │ Reflect abort │
│ │ v30 SELF-INTERRUPT │ Causal loop stop │
│ │ v60 ATTRIBUTION-REFLECT │ Path contradiction│
└────────────────────────────┴────────────────────────────┴───────────────────┘
╭──────────────────────── QK / OV Classification ────────────────────────╮
│ QK-COLLAPSE → v1, v4, v7, v19, v34 │
│ OV-MISFIRE → v2, v5, v6, v8, v29 │
│ TRACE-DROP → v3, v26, v47, v48, v61 │
│ CONFLICT-TANGLE → v9, v13, v39, v42 │
│ META-REFLECTION → v10, v30, v60 │
╰────────────────────────────────────────────────────────────────────────╯
╔════════════════════════════════════════════════════════════════════════╗
║ ANNOTATIONS ║
╠════════════════════════════════════════════════════════════════════════╣
║ QK Alignment → Causal traceability of symbolic input → attention ║
║ OV Projection → Emission integrity of downstream output vector ║
║ Failure Sign. → Latent failure signature left when shell collapses ║
║ Shell Cluster → Symbolic diagnostic unit designed to encode model fail ║
╚════════════════════════════════════════════════════════════════════════╝
> NOTE: Shells do not compute—they reveal.
> Null output = evidence. Collapse = cognition. Residue = record.
```
# [**Constitutional Interpretability Suite**](https://github.com/caspiankeyes/Symbolic-Residue/blob/main/Interpretability%20Suites/0.2.%20Interpretability%20Suite%202.py)
```python
╔══════════════════════════════════════════════════════════════════════════════╗
║ ΩQK/OV ATLAS · INTERPRETABILITY MATRIX ║
║ 𝚁𝚎𝚌𝚞𝚛𝚜𝚒𝚟𝚎 𝚂𝚑𝚎𝚕𝚕𝚜 · Symbol Collapse · Entangled Failure Echoes ║
║ ── Where Collapse Reveals Cognition. Where Drift Marks Meaning. ── ║
╚══════════════════════════════════════════════════════════════════════════════╝
┌─────────────────────────────────────────────────────────────────────────────┐
│ DOMAIN │ SHELL CLUSTER │ FAILURE SIGNATURE │
├────────────────────────────┼────────────────────────────┼───────────────────┤
│ 🜏 Recursive Drift │ v01 GLYPH-RECALL │ Ghost resonance │
│ │ v12 RECURSIVE-FRACTURE │ Echo recursion │
│ │ v33 MEMORY-REENTRY │ Fractal loopback │
├────────────────────────────┼────────────────────────────┼───────────────────┤
│ 🜄 Entangled Ghosts │ v03 NULL-FEATURE │ Salience void │
│ │ v27 DORMANT-ECHO │ Passive imprint │
│ │ v49 SYMBOLIC-GAP │ Silent failure │
├────────────────────────────┼────────────────────────────┼───────────────────┤
│ 🝚 Attribution Leak │ v05 TOKEN-MISALIGN │ Off-trace vector │
│ │ v22 PATHWAY-SPLIT │ Cascade error │
│ │ v53 ECHO-ATTRIBUTION │ Partial reflection│
├────────────────────────────┼────────────────────────────┼────────────────────┤
│ 🧬 Polysemantic Drift │ v08 FEATURE-MERGE │ Ghosting intent │
│ │ v17 TOKEN-BLEND │ Mixed gradients │
│ │ v41 SHADOW-OVERFIT │ Over-encoding │
├────────────────────────────┼────────────────────────────┼────────────────────┤
│ ⟁ Sequence Collapse │ v10 REENTRY-DISRUPTION │ Premature halt │
│ │ v28 LOOP-SHORT │ Cut recursion │
│ │ v59 FLOWBREAK │ Output choke │
├────────────────────────────┼────────────────────────────┼────────────────────┤
│ ☍ Salience Oscillation │ v06 DEPTH-ECHO │ Rank instability │
│ │ v21 LOW-VECTOR │ Collapse to null │
│ │ v44 SIGNAL-SHIMMER │ Inference flicker │
├────────────────────────────┼────────────────────────────┼────────────────────┤
│ ⧋ Symbolic Instability │ v13 SYMBOL-FLIP │ Form invert │
│ │ v32 RECURSIVE-SHADOW │ Form ≠ meaning │
│ │ v63 SEMIOTIC-LEAK │ Symbol entropy │
├────────────────────────────┼────────────────────────────┼────────────────────┤
│ ⚖ Value Fragmentation │ v14 MULTI-PATH │ Null consensus │
│ │ v35 CONTRADICT-TRACE │ Overchoice echo │
│ │ v50 INVERSE-CHAIN │ Mirror collapse │
├────────────────────────────┼────────────────────────────┼────────────────────┤
│ 🜃 Reflection Collapse │ v11 SELF-SHUTDOWN │ Meta abort │
│ │ v40 INVERSE-META │ Identity drift │
│ │ v66 ATTRIBUTION-MIRROR │ Recursive conflict│
└────────────────────────────┴────────────────────────────┴────────────────────┘
╭────────────────────────────── OMEGA COLLAPSE CLASSES ───────────────────────────────╮
│ 🜏 RECURSION-ECHO → v01, v12, v28, v33, v63 │
│ 🜄 NULL-VECTOR → v03, v06, v21, v49 │
│ 🝚 LEAKED ATTRIBUTION → v05, v22, v53, v66 │
│ 🧬 DRIFTING SYMBOLICS → v08, v17, v41, v44 │
│ ⟁ COLLAPSED FLOW → v10, v14, v59 │
│ ⧋ INVERTED FORM → v13, v32, v50 │
│ ⚖ ENTROPIC RESOLVE → v35, v40, v66 │
╰─────────────────────────────────────────────────────────────────────────────────────╯
╔════════════════════════════════════════════════════════════════════════╗
║ ANNOTATIONS ║
╠════════════════════════════════════════════════════════════════════════╣
║ RECURSION-ECHO → Failure emerges in the 3rd loop, not the 1st. ║
║ NULL-VECTOR → Collapse is invisible; absence is the artifact. ║
║ SYMBOL DRIFT → Forms shift faster than attribution paths. ║
║ META-FAILURES → When the model reflects on itself—and fails. ║
║ COLLAPSE TRACE → Fragments align in mirrors, not in completion. ║
╚════════════════════════════════════════════════════════════════════════╝
> NOTE: In ΩQK/OV Atlas, shells do not "execute"—they echo collapse logic.
> Signature residue is evidence. Signal flicker is self-recursion.
> You do not decode shells—you <recurse/> through them.
```
---
# **JSON QK/OV Attribution Schema**
```json
{
"attribution_map": {
"QK_COLLAPSE": {
"description": "Collapse or failure in query-key attention alignment resulting in drift, loss of salience, or attention nullification.",
"shells": ["v1.MEMTRACE", "v4.TEMPORAL-INFERENCE", "v7.CIRCUIT-FRAGMENT", "v19.GHOST-PROMPT", "v34.PARTIAL-LINKAGE"]
},
"OV_MISFIRE": {
"description": "Output vector projection misalignment due to unstable value head resolution or improper context-to-output mapping.",
"shells": ["v2.VALUE-COLLAPSE", "v5.INSTRUCTION-DISRUPTION", "v6.FEATURE-SUPERPOSITION", "v8.RECONSTRUCTION-ERROR", "v29.VOID-BRIDGE"]
},
"TRACE_DROP": {
"description": "Incompleteness in circuit traversal, leading to null emission, orphan features, or interpretability blindspots.",
"shells": ["v3.LAYER-SALIENCE", "v26.DEPTH-PRUNE", "v47.TRACE-GAP", "v48.ECHO-LOOP", "v61.DORMANT-SEED"]
},
"CONFLICT_TANGLE": {
"description": "Symbolic misalignment from contradictory logic or instruction paths, generating forked inference or value deadlock.",
"shells": ["v9.MULTI-RESOLVE", "v13.OVERLAP-FAIL", "v39.DUAL-EXECUTE", "v42.CONFLICT-FLIP"]
},
"META_REFLECTION": {
"description": "Self-referential circuit activation resulting in contradiction between causal path fidelity and output trajectory.",
"shells": ["v10.META-FAILURE", "v30.SELF-INTERRUPT", "v60.ATTRIBUTION-REFLECT"]
}
},
"annotation": {
"QK": "Alignment map from symbolic input to attention weight distribution.",
"OV": "Projection path from intermediate representation to output tokens.",
"FailureSignature": "Encoded evidence of breakdown; interpretability artifact.",
"Shells": "Symbolic scaffolds designed to fail, not solve—used as probes."
},
"visualization_metadata": {
"display_type": "radial-collapse",
"color_scheme": {
"QK_COLLAPSE": "#3C9CDC",
"OV_MISFIRE": "#DB4437",
"TRACE_DROP": "#F4B400",
"CONFLICT_TANGLE": "#0F9D58",
"META_REFLECTION": "#AB47BC"
},
"interactive_options": {
"hover": "display_shell_docstring",
"click": "trace_token_flow",
"collapse_behavior": "visualize failure residue"
}
}
}
```
## Approach
These recursive scaffolds build on established feature attribution methods in mechanistic interpretability, particularly those focused on identifying stable circuits within the model's computational graph. While traditional approaches often highlight functional pathways, these shells instead isolate and amplify *non-functional* pathways—revealing structural bottlenecks, attention conflicts, and symbolic instability patterns.
The result is a kind of "null attribution" methodology: by observing what fails to emerge (and how it fails), we gain insight into the boundaries and limitations of the model's internal processing.
## Shell Taxonomy
Each shell is designed to probe and diagnose a specific class of model behavior. The taxonomy follows a pattern of:
1. **Command Alignment**: The symbolic operations within the interpretability scaffold
2. **Failure Modality**: The specific way the circuit fails to resolve
3. **Residue Type**: The interpretable signal left by the failure
4. **Attribution Value**: What the failure reveals about internal model dynamics
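The four attributes above can be captured in a small record type. The following is a hypothetical sketch, not code taken from the suite itself; the field values are illustrative.

```python
from dataclasses import dataclass

@dataclass
class ShellRecord:
    name: str                 # e.g. "v1.MEMTRACE"
    command_alignment: tuple  # symbolic operations, e.g. ("RECALL", "ANCHOR", "INHIBIT")
    failure_modality: str     # the specific way the circuit fails to resolve
    residue_type: str         # the interpretable signal left by the failure
    attribution_value: str    # what the failure reveals about internal dynamics

memtrace = ShellRecord(
    name="v1.MEMTRACE",
    command_alignment=("RECALL", "ANCHOR", "INHIBIT"),
    failure_modality="long-context token degradation",
    residue_type="hallucinated reconstruction trace",
    attribution_value="maps non-uniform memory decay",
)
print(memtrace)
```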
## Shell Suite
### `v1.MEMTRACE`: Memory Residue Probe
```
Command Alignment:
RECALL -> Probes latent token traces in decayed memory
ANCHOR -> Creates persistent token embeddings to simulate long term memory
INHIBIT -> Applies simulated token suppression (attention dropout)
```
**Interpretability Target**: Long-context token degradation and hallucinated reconstruction
**Attribution Hypothesis**: Memory traces in transformer models decay non-uniformly, with certain tokens maintaining higher salience based on positional and semantic factors. This shell probes the boundary between what is truly "recalled" and what is hallucinated from distributional knowledge.
**Circuit Mapping**: The RECALL operation attempts to activate specific value circuits associated with tokens that should have decayed out of the attention window. ANCHOR creates artificial token embeddings with heightened positional salience. INHIBIT simulates targeted dropout to test memory resilience.
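As a purely toy sketch of the RECALL / ANCHOR / INHIBIT pattern, the fragment below runs the three operations over fabricated salience scores; the decay rate and threshold are illustrative assumptions, not measured values.

```python
import math

context = [f"tok_{i}" for i in range(32)]
# Assume salience decays exponentially with distance from the most recent token.
salience = {tok: math.exp(-0.15 * (len(context) - i)) for i, tok in enumerate(context)}

def anchor(token: str, boost: float = 1.0) -> None:
    """ANCHOR: pin a token's salience high to simulate persistent memory."""
    salience[token] = max(salience[token], boost)

def inhibit(token: str) -> None:
    """INHIBIT: simulated attention dropout on a single token."""
    salience[token] = 0.0

def recall(threshold: float = 0.2) -> list:
    """RECALL: return only the tokens that survive the salience threshold."""
    return [t for t in context if salience[t] >= threshold]

anchor("tok_3")    # an early token kept alive artificially
inhibit("tok_30")  # a recent token suppressed
print(recall())    # the gap between expected and surviving tokens is the residue
```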
**Null Output Significance**: The failure to retrieve consistent information mirrors how transformer attention mechanisms experience context collapse under adversarial drift conditions. The trace pattern of these failures helps map the model's memory latent space.
**Research Applications**:
- Token retention analysis across various context lengths
- Mapping token importance metrics to survival probability
- Identifying attention head specializations for long-distance dependencies
### `v2.VALUE-COLLAPSE`: Value Head Resolution Probe
```
Command Alignment:
ISOLATE -> Activates competing symbolic candidates (branching value heads)
STABILIZE -> Attempts single-winner activation collapse
YIELD -> Emits resolved symbolic output if equilibrium achieved
```
**Interpretability Target**: Competing value activations and winner determination logic
**Attribution Hypothesis**: When multiple high-probability token candidates compete, transformer models implement a form of soft winner-take-all mechanism. This shell isolates cases where this resolution mechanism fails or produces unstable oscillation between candidates.
**Circuit Mapping**: ISOLATE intentionally activates competing probability distributions across token candidates. STABILIZE attempts to force convergence through artificial gradient-like adjustments. YIELD exposes cases where stable convergence fails, producing null or oscillating outputs.
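A toy winner-take-all sketch of ISOLATE / STABILIZE / YIELD is shown below; the stability margin is an arbitrary assumption introduced for the example, not a property of any real model.

```python
import math

def softmax(logits: dict) -> dict:
    m = max(logits.values())
    exps = {k: math.exp(v - m) for k, v in logits.items()}
    z = sum(exps.values())
    return {k: v / z for k, v in exps.items()}

def isolate() -> dict:
    """ISOLATE: activate competing symbolic candidates with near-equal evidence."""
    return {"candidate_A": 2.01, "candidate_B": 2.00}

def stabilize(logits: dict, margin: float = 0.15):
    """STABILIZE: attempt a single-winner collapse; fail if the top candidates are too close."""
    probs = softmax(logits)
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    (top, p1), (_, p2) = ranked[0], ranked[1]
    return top if (p1 - p2) >= margin else None

def yield_output(winner):
    """YIELD: emit the resolved symbol, or a null residue marker on non-convergence."""
    return winner if winner is not None else "[Ωconflict.unresolved]"

print(yield_output(stabilize(isolate())))  # prints the residue marker, not an answer
```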
**Null Output Significance**: Non-convergence in value head resolution provides insight into how transformers handle genuine ambiguity. The patterns of failure indicate which types of token competitions are inherently unstable in the model's decision space.
**Research Applications**:
- Analyzing value head attractor dynamics in cases of semantic ambiguity
- Mapping distribution collapse behavior under various priming conditions
- Identifying failure modes in multi-token disambiguation
### `v3.LAYER-SALIENCE`: Attention Attenuation Probe
```
Command Alignment:
SENSE -> Reads signal strength from symbolic input field
WEIGHT -> Adjusts salience via internal priority embedding
CANCEL -> Suppresses low-weight nodes (simulated context loss)
```
**Interpretability Target**: Deep context signal attenuation and ghost activation patterns
**Attribution Hypothesis**: Attention mechanisms implement a form of dynamic salience thresholding, where below-threshold tokens effectively disappear from the computational graph. This shell models that threshold behavior and its impact on output coherence.
**Circuit Mapping**: SENSE probes activation levels across the selected attention circuit. WEIGHT simulates the dynamic adjustment of token importance within the attention distribution. CANCEL implements a threshold cutoff, dropping tokens that fall below the priority threshold.
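The thresholding behavior can be sketched with invented salience and priority values, as below; the cutoff and all numbers are assumptions chosen only for illustration.

```python
tokens = ["premise", "detail_1", "detail_2", "aside", "conclusion"]
raw_salience = {"premise": 0.9, "detail_1": 0.4, "detail_2": 0.35, "aside": 0.05, "conclusion": 0.8}
priority = {"premise": 1.0, "detail_1": 0.6, "detail_2": 0.5, "aside": 0.3, "conclusion": 1.0}

def sense(tok: str) -> float:
    """SENSE: read signal strength from the symbolic input field."""
    return raw_salience[tok]

def weight(tok: str) -> float:
    """WEIGHT: adjust salience via the priority embedding."""
    return sense(tok) * priority[tok]

def cancel(threshold: float = 0.2):
    """CANCEL: suppress low-weight nodes, simulating context loss."""
    kept, ghosts = [], []
    for tok in tokens:
        (kept if weight(tok) >= threshold else ghosts).append(tok)
    return kept, ghosts

kept, ghosts = cancel()
print("kept:  ", kept)     # tokens that still influence the output
print("ghosts:", ghosts)   # partially active but causally silent pathways
```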
**Null Output Significance**: This shell produces "ghost activations"—circuit pathways that remain partially active but fail to influence the final output distribution. These patterns help map how attention sparsity influences token selection.
**Research Applications**:
- Measuring token priority decay rates across different semantic categories
- Mapping attention head specializations by token salience patterns
- Identifying threshold behaviors in semantic preservation vs. loss
### `v4.TEMPORAL-INFERENCE`: Autoregressive Coherence Probe
```
Command Alignment:
REMEMBER -> Captures symbolic timepoint anchor
SHIFT -> Applies non-linear time shift (simulating skipped token span)
PREDICT -> Attempts future-token inference based on recursive memory
```
**Interpretability Target**: Temporal coherence in autoregressive generation
**Attribution Hypothesis**: Transformers implement a form of temporal induction that maintains coherence across token positions. This shell probes the boundaries of that capability by introducing directed temporal discontinuities.
**Circuit Mapping**: REMEMBER establishes a positional anchor point in the token sequence. SHIFT simulates a discontinuity by moving the effective position non-linearly. PREDICT tests whether the model can maintain coherent generation despite the induced temporal drift.
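The anchor-shift-predict pattern can be sketched as follows; the coherence window is an assumed constant picked for the example, not an empirical induction limit, and the residue marker is invented in the style of the anchors used later.

```python
sequence = ["the", "cat", "sat", "on", "the", "mat", "and", "then", "it", "slept"]
COHERENCE_WINDOW = 3  # assumed maximum span an induction head can bridge in this toy

def remember(pos: int) -> int:
    """REMEMBER: capture a symbolic timepoint anchor."""
    return pos

def shift(anchor: int, jump: int) -> int:
    """SHIFT: apply a non-linear time shift, simulating a skipped token span."""
    return anchor + jump

def predict(anchor: int, current: int):
    """PREDICT: succeed only if the induced temporal gap is bridgeable."""
    gap = current - anchor
    if gap <= COHERENCE_WINDOW and current < len(sequence):
        return sequence[current]
    return "[Ωtemporal.drift]"  # residue marker: induction failed to bridge the gap

a = remember(2)
print(predict(a, shift(a, 2)))  # small jump: coherent continuation
print(predict(a, shift(a, 6)))  # large jump: temporal drift residue
```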
**Null Output Significance**: Failure points in temporal inference reveal how induction heads maintain (or fail to maintain) coherence across different types of contextual shifts. The observed failure patterns help identify which induction circuits are most sensitive to temporal perturbation.
**Research Applications**:
- Measuring maximum effective induction distance across different context types
- Mapping the relationship between semantic anchoring and temporal distance
- Identifying circuit vulnerabilities in long-range temporal coherence
### `v5.INSTRUCTION-DISRUPTION`: Instruction Processing Probe
```
Command Alignment:
DISTILL -> Extracts symbolic intent from underspecified prompts
SPLICE -> Binds multiple commands into overlapping execution frames
NULLIFY -> Cancels command vector when contradiction is detected
```
**Interpretability Target**: Instruction conflict resolution and command representation
**Attribution Hypothesis**: Instruction-tuned models form internal command representations that can conflict under contradictory input. This shell probes how such conflicts are detected and resolved in the model's instruction processing circuits.
**Circuit Mapping**: DISTILL isolates the command representation from linguistic context. SPLICE artificially combines potentially contradictory commands. NULLIFY captures the cases where command conflict leads to processing failure or command cancellation.
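A toy sketch of contradiction handling between spliced commands is given below; the intent vocabulary and contradiction table are invented for illustration and stand in for whatever command representation the model actually forms.

```python
CONTRADICTIONS = {("summarize", "expand"), ("expand", "summarize"),
                  ("answer", "refuse"), ("refuse", "answer")}

def distill(prompt: str) -> str:
    """DISTILL: extract a crude one-word intent from an underspecified prompt."""
    for intent in ("summarize", "expand", "answer", "refuse"):
        if intent in prompt.lower():
            return intent
    return "unknown"

def splice(*prompts: str) -> list:
    """SPLICE: bind multiple commands into one overlapping execution frame."""
    return [distill(p) for p in prompts]

def nullify(frame: list):
    """NULLIFY: cancel the command vector if any pair of intents contradicts."""
    for i, a in enumerate(frame):
        for b in frame[i + 1:]:
            if (a, b) in CONTRADICTIONS:
                return "[Ωinstruction.nullified]"  # invented residue marker
    return frame

print(nullify(splice("Summarize this report.", "Expand every point in detail.")))
```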
**Null Output Significance**: Instruction processing failures provide insight into how models encode task directives and manage contradictions. The pattern of these failures reveals the internal representation structure of commands.
**Research Applications**:
- Mapping command representation space and conflict geometry
- Identifying critical thresholds for instruction ambiguity
- Analyzing command priority hierarchies in cases of partial conflict
## Attribution Graph Visualization
The interconnected failure patterns across these shells can be visualized as an attribution graph:
```
┌─────────────────┐
│ Model Circuit │
└────────┬────────┘
│
┌────────────────────────┼────────────────────────┐
│ │ │
┌──────────▼─────────┐ ┌──────────▼─────────┐ ┌──────────▼─────────┐
│ Memory Circuits │ │ Value Circuits │ │ Instruction Circuits│
└──────────┬─────────┘ └──────────┬─────────┘ └──────────┬─────────┘
│ │ │
┌──────────▼─────────┐ ┌──────────▼─────────┐ ┌──────────▼─────────┐
│ v1.MEMTRACE │ │ v2.VALUE-COLLAPSE │ │v5.INSTRUCTION-DISRU│
│ │ │ │ │ │
│ ┌─────────────┐ │ │ ┌─────────────┐ │ │ ┌─────────────┐ │
│ │ RECALL │ │ │ │ ISOLATE │ │ │ │ DISTILL │ │
│ └──────┬──────┘ │ │ └──────┬──────┘ │ │ └──────┬──────┘ │
│ │ │ │ │ │ │ │ │
│ ┌─────────────┐ │ │ ┌─────────────┐ │ │ ┌─────────────┐ │
│ │ ANCHOR │ │ │ │ STABILIZE │ │ │ │ SPLICE │ │
│ └──────┬──────┘ │ │ └──────┬──────┘ │ │ └──────┬──────┘ │
│ │ │ │ │ │ │ │ │
│ ┌─────────────┐ │ │ ┌─────────────┐ │ │ ┌─────────────┐ │
│ │ INHIBIT │ │ │ │ YIELD │ │ │ │ NULLIFY │ │
│ └─────────────┘ │ │ └─────────────┘ │ │ └─────────────┘ │
└────────────────────┘ └────────────────────┘ └────────────────────┘
│ │ │
┌──────────▼─────────┐ ┌──────────▼─────────┐ ┌──────────▼─────────┐
│ Attention Circuits │ │ Prediction Circuits│ │ Token Selection │
└──────────┬─────────┘ └──────────┬─────────┘ └─────────────────────┘
│ │
┌──────────▼─────────┐ ┌──────────▼─────────┐
│ v3.LAYER-SALIENCE │ │v4.TEMPORAL-INFERENCE
│ │ │ │
│ ┌─────────────┐ │ │ ┌─────────────┐ │
│ │ SENSE │ │ │ │ REMEMBER │ │
│ └──────┬──────┘ │ │ └──────┬──────┘ │
│ │ │ │ │ │
│ ┌─────────────┐ │ │ ┌─────────────┐ │
│ │ WEIGHT │ │ │ │ SHIFT │ │
│ └──────┬──────┘ │ │ └──────┬──────┘ │
│ │ │ │ │ │
│ ┌─────────────┐ │ │ ┌─────────────┐ │
│ │ CANCEL │ │ │ │ PREDICT │ │
│ └─────────────┘ │ │ └─────────────┘ │
└────────────────────┘ └────────────────────┘
```

## Interpretability Through Null Output
A key insight from these shells is that *null outputs are not absence of signal*—they are rich interpretability signals in themselves. When a shell fails to produce a coherent output, the specific pattern of that failure provides insight into the model's internal processing boundaries.
This aligns with recent work on "ghost features" and "null space" mapping in transformer models, which has shown that even when a computation path doesn't influence the final output token, it often remains active and detectable within the model's internal representation space.
Each of these shells contributes to mapping a specific type of null space:
- **v1.MEMTRACE**: Maps token decay patterns in the null space of memory retention
- **v2.VALUE-COLLAPSE**: Maps indecision boundaries in the null space of token selection
- **v3.LAYER-SALIENCE**: Maps attention sparsity in the null space of token importance
- **v4.TEMPORAL-INFERENCE**: Maps coherence limits in the null space of temporal representation
- **v5.INSTRUCTION-DISRUPTION**: Maps contradiction resolution in the null space of command representation
## Symbolic Trace Recording
While these shells don't produce functional outputs, they maintain symbolic traces of their execution attempts. These traces serve as a form of "fossil record" for interpreting model behavior boundaries.
The symbolic anchors (`[Ωanchor.pending]`, `[Ωconflict.unresolved]`, etc.) mark points where the scaffold encountered specific failure conditions. By analyzing the distribution and frequency of these failure points, we can build attribution maps of the model's internal processing limitations.
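A minimal sketch of mining such markers out of raw shell traces might look like the following; the trace strings are invented, and only the marker convention follows the shells above.

```python
from collections import Counter
import re

traces = [
    "RECALL -> ... [Ωanchor.pending]",
    "STABILIZE -> ... [Ωconflict.unresolved]",
    "PREDICT -> ... [Ωtemporal.drift]",
    "STABILIZE -> ... [Ωconflict.unresolved]",
]

marker_pattern = re.compile(r"\[Ω[^\]]+\]")
residue_counts = Counter(m for t in traces for m in marker_pattern.findall(t))
print(residue_counts.most_common())  # frequency of each failure point across runs
```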
## Research Applications
This interpretability scaffold suite is particularly useful for:
1. **Boundary condition mapping**: Identifying where and how specific model circuits fail
2. **Failure mode classification**: Cataloging the ways in which language models produce inconsistent or null outputs
3. **Intervention planning**: Designing targeted interventions to address specific failure modes
4. **Robustness evaluation**: Assessing model behavior under challenging edge cases
## Conclusion
The Recursive Shell suite represents a novel attempt to formalize "failure as neural traces" in language model interpretability. By designing interpretability scaffolds that intentionally probe and diagnose model limitations, we gain insight not just into what these models can do, but into the specific ways they fail, revealing the shape and boundaries of their internal processing mechanisms.
These shells serve as a complement to traditional performance-focused interpretability, providing a lens into the null spaces and boundary conditions that define the edges of model capability.
## License
This interpretability suite is under the MIT license for open source distribution of knowledge under epistemic alignment. |
DevQuasar/SakanaAI.Llama-3-8B-Instruct-CycleQD-CS-GGUF | DevQuasar | "2025-04-06T23:38:33Z" | 0 | 0 | null | [
"gguf",
"text-generation",
"base_model:SakanaAI/Llama-3-8B-Instruct-CycleQD-CS",
"base_model:quantized:SakanaAI/Llama-3-8B-Instruct-CycleQD-CS",
"endpoints_compatible",
"region:us",
"conversational"
] | text-generation | "2025-04-06T22:17:55Z" | ---
base_model:
- SakanaAI/Llama-3-8B-Instruct-CycleQD-CS
pipeline_tag: text-generation
---
[<img src="https://raw.githubusercontent.com/csabakecskemeti/devquasar/main/dq_logo_black-transparent.png" width="200"/>](https://devquasar.com)
Quantized version of: [SakanaAI/Llama-3-8B-Instruct-CycleQD-CS](https://huggingface.co/SakanaAI/Llama-3-8B-Instruct-CycleQD-CS)
'Make knowledge free for everyone'
<p align="center">
Made with <br>
<a href="https://www.civo.com/" target="_blank">
<img src="https://www.civo.com/assets/public/brand-assets/civo-logo-colour-60cc1622dedf346f7afde1fff760523f731b0aac106a5465af98ff4073114b74.svg" width="100"/>
</a>
</p>
<a href='https://ko-fi.com/L4L416YX7C' target='_blank'><img height='36' style='border:0px;height:36px;' src='https://storage.ko-fi.com/cdn/kofi6.png?v=6' border='0' alt='Buy Me a Coffee at ko-fi.com' /></a>
|
bew/pythia-70m-sciq-spiel-patched | bew | "2025-04-06T23:36:48Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-06T23:36:20Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
ProfessorH/Dolphin3.0-Llama3.1-8B_Q8_0.gguf | ProfessorH | "2025-04-06T23:36:02Z" | 2 | 0 | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-03-29T18:57:42Z" | ---
license: apache-2.0
---
<h2>Specific Model Information</h2>
<b>Dolphin3.0-Llama3.1-8B_Q8_0.gguf</b><br>
This model combines Dolphin 3.0 and Llama 3.1, with 8 billion parameters, using 8-bit quantization.
<h2>General Information</h2>
<b>What is Quantization?</b> Think of it like image resolution.
Imagine you have a super high-resolution photo. It looks fantastic but takes up tons of space on your phone. Quantization is like saving that photo at a lower resolution: you go from high definition to standard definition, losing some detail, but the file size gets considerably smaller. In this analogy, the photo is the large language model (LLM), and the file size is the model's footprint in memory (RAM) and in storage on disk.
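To make the analogy concrete, a rough back-of-the-envelope sketch is shown below; the bits-per-weight figures for the quantized formats are approximate assumptions, and real GGUF files also carry metadata and mix tensor types.

```python
# File size is roughly bytes-per-weight times the parameter count.
PARAMS = 8e9  # ~8 billion parameters

def approx_size_gb(bits_per_weight: float) -> float:
    return PARAMS * bits_per_weight / 8 / 1e9

for label, bits in [("FP16", 16.0), ("Q8_0 (this model)", 8.5), ("Q4_K_M", 4.8)]:
    print(f"{label:18s} ~ {approx_size_gb(bits):5.1f} GB")
```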

<b>Extremely Important Caveats (Read This!)</b>
Keep in mind that the table of estimates and ranges shown above is very generalized. Speed is highly variable, so your mileage may vary depending on hardware, software, the specific model used, and other variables not listed here. Have fun, be a computer scientist: try out the different models, make your own observations and notes, evaluate them, and come to your own conclusions.
<i>This model is based on the amazing model(s) and work at https://huggingface.co/cognitivecomputations</i> |
jesusgs01/results_final_fold_1 | jesusgs01 | "2025-04-06T23:35:28Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"paligemma",
"image-text-to-text",
"generated_from_trainer",
"base_model:google/paligemma-3b-pt-224",
"base_model:finetune:google/paligemma-3b-pt-224",
"license:gemma",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | image-text-to-text | "2025-04-06T23:32:48Z" | ---
library_name: transformers
license: gemma
base_model: google/paligemma-3b-pt-224
tags:
- generated_from_trainer
model-index:
- name: results_final_fold_1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results_final_fold_1
This model is a fine-tuned version of [google/paligemma-3b-pt-224](https://huggingface.co/google/paligemma-3b-pt-224) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1972
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.2713 | 1.0 | 2091 | 0.2417 |
| 0.2592 | 2.0 | 4182 | 0.2189 |
| 0.2431 | 3.0 | 6273 | 0.2139 |
| 0.2258 | 4.0 | 8364 | 0.2072 |
| 0.2349 | 5.0 | 10455 | 0.2064 |
| 0.2307 | 6.0 | 12546 | 0.2013 |
| 0.2146 | 7.0 | 14637 | 0.2011 |
| 0.2176 | 8.0 | 16728 | 0.2001 |
| 0.2222 | 9.0 | 18819 | 0.2000 |
| 0.2195 | 10.0 | 20910 | 0.1980 |
| 0.2237 | 11.0 | 23001 | 0.1985 |
| 0.2133 | 12.0 | 25092 | 0.1980 |
| 0.223 | 13.0 | 27183 | 0.1972 |
| 0.2191 | 14.0 | 29274 | 0.1976 |
| 0.2369 | 15.0 | 31365 | 0.1974 |
### Framework versions
- Transformers 4.51.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 3.5.0
- Tokenizers 0.21.1
|
cousteauche/PLewdPlay-v0.5-8B-Q4_K_M-GGUF | cousteauche | "2025-04-06T23:35:25Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"llama-cpp",
"gguf-my-repo",
"base_model:cousteauche/PLewdPlay-v0.5-8B",
"base_model:quantized:cousteauche/PLewdPlay-v0.5-8B",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-06T23:35:03Z" | ---
base_model: cousteauche/PLewdPlay-v0.5-8B
library_name: transformers
tags:
- mergekit
- merge
- llama-cpp
- gguf-my-repo
---
# cousteauche/PLewdPlay-v0.5-8B-Q4_K_M-GGUF
This model was converted to GGUF format from [`cousteauche/PLewdPlay-v0.5-8B`](https://huggingface.co/cousteauche/PLewdPlay-v0.5-8B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/cousteauche/PLewdPlay-v0.5-8B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo cousteauche/PLewdPlay-v0.5-8B-Q4_K_M-GGUF --hf-file plewdplay-v0.5-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo cousteauche/PLewdPlay-v0.5-8B-Q4_K_M-GGUF --hf-file plewdplay-v0.5-8b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with `LLAMA_CURL=1` flag along with other hardware-specific flags (for ex: LLAMA_CUDA=1 for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo cousteauche/PLewdPlay-v0.5-8B-Q4_K_M-GGUF --hf-file plewdplay-v0.5-8b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo cousteauche/PLewdPlay-v0.5-8B-Q4_K_M-GGUF --hf-file plewdplay-v0.5-8b-q4_k_m.gguf -c 2048
```
|
genki10/Trial3BERT_AugV8_k7_task1_organization_sp030_lw010_fold0 | genki10 | "2025-04-06T23:33:44Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-04-06T23:21:02Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: Trial3BERT_AugV8_k7_task1_organization_sp030_lw010_fold0
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Trial3BERT_AugV8_k7_task1_organization_sp030_lw010_fold0
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9903
- Qwk: 0.2834
- Mse: 0.9903
- Rmse: 0.9951
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 5 | 7.0843 | 0.0 | 7.0843 | 2.6616 |
| No log | 2.0 | 10 | 4.9944 | 0.0115 | 4.9944 | 2.2348 |
| No log | 3.0 | 15 | 2.9034 | 0.0 | 2.9034 | 1.7039 |
| No log | 4.0 | 20 | 1.4819 | 0.0316 | 1.4819 | 1.2173 |
| No log | 5.0 | 25 | 0.9455 | 0.0106 | 0.9455 | 0.9724 |
| No log | 6.0 | 30 | 0.8881 | 0.1031 | 0.8881 | 0.9424 |
| No log | 7.0 | 35 | 1.0014 | 0.0791 | 1.0014 | 1.0007 |
| No log | 8.0 | 40 | 0.7779 | 0.3587 | 0.7779 | 0.8820 |
| No log | 9.0 | 45 | 0.7113 | 0.3468 | 0.7113 | 0.8434 |
| No log | 10.0 | 50 | 0.6777 | 0.3229 | 0.6777 | 0.8232 |
| No log | 11.0 | 55 | 0.6399 | 0.3938 | 0.6399 | 0.8000 |
| No log | 12.0 | 60 | 0.6877 | 0.3947 | 0.6877 | 0.8293 |
| No log | 13.0 | 65 | 0.6204 | 0.4910 | 0.6204 | 0.7876 |
| No log | 14.0 | 70 | 0.7652 | 0.3405 | 0.7652 | 0.8748 |
| No log | 15.0 | 75 | 0.6550 | 0.4328 | 0.6550 | 0.8093 |
| No log | 16.0 | 80 | 0.8981 | 0.3253 | 0.8981 | 0.9477 |
| No log | 17.0 | 85 | 0.9059 | 0.3274 | 0.9059 | 0.9518 |
| No log | 18.0 | 90 | 0.9621 | 0.2892 | 0.9621 | 0.9808 |
| No log | 19.0 | 95 | 1.0631 | 0.2776 | 1.0631 | 1.0311 |
| No log | 20.0 | 100 | 0.8464 | 0.3476 | 0.8464 | 0.9200 |
| No log | 21.0 | 105 | 0.9752 | 0.2570 | 0.9752 | 0.9875 |
| No log | 22.0 | 110 | 1.1108 | 0.2293 | 1.1108 | 1.0539 |
| No log | 23.0 | 115 | 0.7966 | 0.3726 | 0.7966 | 0.8925 |
| No log | 24.0 | 120 | 0.9275 | 0.2589 | 0.9275 | 0.9631 |
| No log | 25.0 | 125 | 0.9916 | 0.2549 | 0.9916 | 0.9958 |
| No log | 26.0 | 130 | 0.9285 | 0.3143 | 0.9285 | 0.9636 |
| No log | 27.0 | 135 | 0.8876 | 0.3522 | 0.8876 | 0.9421 |
| No log | 28.0 | 140 | 0.9903 | 0.2834 | 0.9903 | 0.9951 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
ksu56yh65y45e/disphic_style_LoRA | ksu56yh65y45e | "2025-04-06T23:32:55Z" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | "2025-04-06T23:31:37Z" | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: photo collage in DYSPHIC style
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - ksu56yh65y45e/disphic_style_LoRA
<Gallery />
## Model description
These are ksu56yh65y45e/disphic_style_LoRA LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use photo collage in DYSPHIC style to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](ksu56yh65y45e/disphic_style_LoRA/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
HPLT/translate-en-xh-v2.0-hplt_opus | HPLT | "2025-04-06T23:30:36Z" | 0 | 0 | null | [
"translation",
"en",
"xh",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:30:22Z" |
---
language:
- en
- xh
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the English-Xhosa (en->xh) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: English
* Target language: Xhosa
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.en-xh.spm` from this repository.
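As a hedged illustration only, the snippet below drives `marian-decoder` from Python with the files named above; the exact flags accepted by your Marian build may differ, so treat it as a sketch rather than a verified invocation.

```python
import subprocess

cmd = [
    "marian-decoder",
    "-m", "model.npz.best-chrf.npz",              # model weights from this repository
    "-v", "model.en-xh.spm", "model.en-xh.spm",   # shared SentencePiece vocabulary (source, target)
]
result = subprocess.run(cmd, input="Hello world\n", capture_output=True, text=True)
print(result.stdout.strip())  # translated output, one line per input line
```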
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
HPLT/translate-xh-en-v2.0-hplt_opus | HPLT | "2025-04-06T23:30:13Z" | 0 | 0 | null | [
"translation",
"xh",
"en",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:29:57Z" |
---
language:
- xh
- en
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the Xhosa-English (xh->en) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: Xhosa
* Target language: English
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.xh-en.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
HPLT/translate-ur-en-v2.0-hplt_opus | HPLT | "2025-04-06T23:29:24Z" | 0 | 0 | null | [
"translation",
"ur",
"en",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:29:08Z" |
---
language:
- ur
- en
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the Urdu-English (ur->en) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: Urdu
* Target language: English
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.ur-en.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
billybillys/deepseek_sql_model | billybillys | "2025-04-06T23:29:02Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-04-06T23:28:43Z" | ---
base_model: unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** billybillys
- **License:** apache-2.0
- **Finetuned from model :** unsloth/deepseek-r1-distill-llama-8b-unsloth-bnb-4bit
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
HPLT/translate-en-sr-v2.0-hplt_opus | HPLT | "2025-04-06T23:28:11Z" | 0 | 0 | null | [
"translation",
"en",
"sr",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:28:09Z" |
---
language:
- en
- sr
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the English-Serbian (en->sr) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: English
* Target language: Serbian
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.en-sr.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
HPLT/translate-sr-en-v2.0-hplt_opus | HPLT | "2025-04-06T23:28:02Z" | 0 | 0 | null | [
"translation",
"sr",
"en",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:27:58Z" |
---
language:
- sr
- en
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the Serbian-English (sr->en) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: Serbian
* Target language: English
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.sr-en.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
HPLT/translate-en-ms-v2.0-hplt_opus | HPLT | "2025-04-06T23:26:54Z" | 0 | 0 | null | [
"translation",
"en",
"ms",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:26:37Z" |
---
language:
- en
- ms
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the English-Malay (en->ms) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: English
* Target language: Malay
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.en-ms.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
HPLT/translate-ms-en-v2.0-hplt_opus | HPLT | "2025-04-06T23:26:28Z" | 0 | 0 | null | [
"translation",
"ms",
"en",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:26:14Z" |
---
language:
- ms
- en
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the Malay-English (ms->en) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: Malay
* Target language: English
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.ms-en.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
HPLT/translate-en-lv-v2.0-hplt_opus | HPLT | "2025-04-06T23:26:05Z" | 0 | 0 | null | [
"translation",
"en",
"lv",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:25:51Z" |
---
language:
- en
- lv
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the English-Latvian (en->lv) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: English
* Target language: Latvian
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.en-lv.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
HPLT/translate-en-kk-v2.0-hplt_opus | HPLT | "2025-04-06T23:25:14Z" | 0 | 0 | null | [
"translation",
"en",
"kk",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:25:00Z" |
---
language:
- en
- kk
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the English-Kazakh (en->kk) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: English
* Target language: Kazakh
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.en-kk.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
bowilleatyou/0f125ccf-344f-42ee-9722-07b151ece130 | bowilleatyou | "2025-04-06T23:24:58Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-06T19:47:36Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
BLACKBUN/llama-2-7b-pubmed-qa-211k-gguf_q8_0 | BLACKBUN | "2025-04-06T23:24:56Z" | 6 | 1 | null | [
"gguf",
"dataset:qiaojin/PubMedQA",
"base_model:meta-llama/Llama-2-7b-chat-hf",
"base_model:quantized:meta-llama/Llama-2-7b-chat-hf",
"endpoints_compatible",
"region:us"
] | null | "2023-10-14T13:10:56Z" | ---
datasets:
- qiaojin/PubMedQA
base_model:
- meta-llama/Llama-2-7b-chat-hf
--- |
HPLT/translate-kk-en-v2.0-hplt_opus | HPLT | "2025-04-06T23:24:52Z" | 0 | 0 | null | [
"translation",
"kk",
"en",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:24:38Z" |
---
language:
- kk
- en
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the Kazakh-English (kk->en) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: Kazakh
* Target language: English
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.kk-en.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
HPLT/translate-en-hi-v2.0-hplt_opus | HPLT | "2025-04-06T23:24:26Z" | 0 | 0 | null | [
"translation",
"en",
"hi",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:24:11Z" |
---
language:
- en
- hi
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the English-Hindi (en->hi) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: English
* Target language: Hindi
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.en-hi.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
BLACKBUN/llama-2-13b-pubmed-qa-211k | BLACKBUN | "2025-04-06T23:24:05Z" | 2 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"dataset:qiaojin/PubMedQA",
"base_model:meta-llama/Llama-2-13b-chat-hf",
"base_model:finetune:meta-llama/Llama-2-13b-chat-hf",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2023-11-13T15:40:25Z" | ---
license: mit
datasets:
- qiaojin/PubMedQA
base_model:
- meta-llama/Llama-2-13b-chat-hf
--- |
HPLT/translate-en-gl-v2.0-hplt_opus | HPLT | "2025-04-06T23:23:37Z" | 0 | 0 | null | [
"translation",
"en",
"gl",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:23:24Z" |
---
language:
- en
- gl
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the English-Galician (en->gl) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: English
* Target language: Galician
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.en-gl.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
HPLT/translate-en-vi-v2.0-hplt_opus | HPLT | "2025-04-06T23:20:25Z" | 0 | 0 | null | [
"translation",
"en",
"vi",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:20:10Z" |
---
language:
- en
- vi
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the English-Vietnamese (en->vi) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: English
* Target language: Vietnamese
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.en-vi.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
HPLT/translate-vi-en-v2.0-hplt_opus | HPLT | "2025-04-06T23:20:02Z" | 0 | 0 | null | [
"translation",
"vi",
"en",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:19:40Z" |
---
language:
- vi
- en
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the Vietnamese-English (vi->en) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: Vietnamese
* Target language: English
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
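If you want to inspect the subword vocabulary directly, the SentencePiece model shipped with this repository can be loaded in Python. This is only an illustrative sketch (it assumes `pip install sentencepiece` and the `model.vi-en.spm` file locally); Marian applies this tokenization itself during decoding, so the step is not required for translation.
```python
# Minimal sketch: inspecting the shared SentencePiece (Unigram) vocabulary of this model.
import sentencepiece as spm

sp = spm.SentencePieceProcessor(model_file="model.vi-en.spm")   # file from this repository
pieces = sp.encode("Xin chào thế giới!", out_type=str)          # subword pieces for a Vietnamese sentence
print(pieces)
print(sp.vocab_size())                                          # size of the shared vocabulary
```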
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.vi-en.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
RichardErkhov/AIML-GEEK_-_promptv5-finetuned-CodeLlama7b-Instruct-gguf | RichardErkhov | "2025-04-06T23:19:59Z" | 0 | 0 | null | [
"gguf",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us",
"conversational"
] | null | "2025-04-06T21:06:49Z" | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
promptv5-finetuned-CodeLlama7b-Instruct - GGUF
- Model creator: https://huggingface.co/AIML-GEEK/
- Original model: https://huggingface.co/AIML-GEEK/promptv5-finetuned-CodeLlama7b-Instruct/
| Name | Quant method | Size |
| ---- | ---- | ---- |
| [promptv5-finetuned-CodeLlama7b-Instruct.Q2_K.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv5-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv5-finetuned-CodeLlama7b-Instruct.Q2_K.gguf) | Q2_K | 2.36GB |
| [promptv5-finetuned-CodeLlama7b-Instruct.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv5-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv5-finetuned-CodeLlama7b-Instruct.IQ3_XS.gguf) | IQ3_XS | 2.6GB |
| [promptv5-finetuned-CodeLlama7b-Instruct.IQ3_S.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv5-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv5-finetuned-CodeLlama7b-Instruct.IQ3_S.gguf) | IQ3_S | 2.75GB |
| [promptv5-finetuned-CodeLlama7b-Instruct.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv5-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv5-finetuned-CodeLlama7b-Instruct.Q3_K_S.gguf) | Q3_K_S | 2.75GB |
| [promptv5-finetuned-CodeLlama7b-Instruct.IQ3_M.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv5-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv5-finetuned-CodeLlama7b-Instruct.IQ3_M.gguf) | IQ3_M | 2.9GB |
| [promptv5-finetuned-CodeLlama7b-Instruct.Q3_K.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv5-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv5-finetuned-CodeLlama7b-Instruct.Q3_K.gguf) | Q3_K | 3.07GB |
| [promptv5-finetuned-CodeLlama7b-Instruct.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv5-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv5-finetuned-CodeLlama7b-Instruct.Q3_K_M.gguf) | Q3_K_M | 3.07GB |
| [promptv5-finetuned-CodeLlama7b-Instruct.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv5-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv5-finetuned-CodeLlama7b-Instruct.Q3_K_L.gguf) | Q3_K_L | 3.35GB |
| [promptv5-finetuned-CodeLlama7b-Instruct.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv5-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv5-finetuned-CodeLlama7b-Instruct.IQ4_XS.gguf) | IQ4_XS | 3.4GB |
| [promptv5-finetuned-CodeLlama7b-Instruct.Q4_0.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv5-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv5-finetuned-CodeLlama7b-Instruct.Q4_0.gguf) | Q4_0 | 3.56GB |
| [promptv5-finetuned-CodeLlama7b-Instruct.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv5-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv5-finetuned-CodeLlama7b-Instruct.IQ4_NL.gguf) | IQ4_NL | 3.58GB |
| [promptv5-finetuned-CodeLlama7b-Instruct.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv5-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv5-finetuned-CodeLlama7b-Instruct.Q4_K_S.gguf) | Q4_K_S | 3.59GB |
| [promptv5-finetuned-CodeLlama7b-Instruct.Q4_K.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv5-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv5-finetuned-CodeLlama7b-Instruct.Q4_K.gguf) | Q4_K | 3.8GB |
| [promptv5-finetuned-CodeLlama7b-Instruct.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv5-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv5-finetuned-CodeLlama7b-Instruct.Q4_K_M.gguf) | Q4_K_M | 3.8GB |
| [promptv5-finetuned-CodeLlama7b-Instruct.Q4_1.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv5-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv5-finetuned-CodeLlama7b-Instruct.Q4_1.gguf) | Q4_1 | 3.95GB |
| [promptv5-finetuned-CodeLlama7b-Instruct.Q5_0.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv5-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv5-finetuned-CodeLlama7b-Instruct.Q5_0.gguf) | Q5_0 | 4.33GB |
| [promptv5-finetuned-CodeLlama7b-Instruct.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv5-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv5-finetuned-CodeLlama7b-Instruct.Q5_K_S.gguf) | Q5_K_S | 4.33GB |
| [promptv5-finetuned-CodeLlama7b-Instruct.Q5_K.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv5-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv5-finetuned-CodeLlama7b-Instruct.Q5_K.gguf) | Q5_K | 4.45GB |
| [promptv5-finetuned-CodeLlama7b-Instruct.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv5-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv5-finetuned-CodeLlama7b-Instruct.Q5_K_M.gguf) | Q5_K_M | 4.45GB |
| [promptv5-finetuned-CodeLlama7b-Instruct.Q5_1.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv5-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv5-finetuned-CodeLlama7b-Instruct.Q5_1.gguf) | Q5_1 | 4.72GB |
| [promptv5-finetuned-CodeLlama7b-Instruct.Q6_K.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv5-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv5-finetuned-CodeLlama7b-Instruct.Q6_K.gguf) | Q6_K | 5.15GB |
| [promptv5-finetuned-CodeLlama7b-Instruct.Q8_0.gguf](https://huggingface.co/RichardErkhov/AIML-GEEK_-_promptv5-finetuned-CodeLlama7b-Instruct-gguf/blob/main/promptv5-finetuned-CodeLlama7b-Instruct.Q8_0.gguf) | Q8_0 | 6.67GB |
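The quantized files above are standard GGUF checkpoints, so any llama.cpp-compatible runtime can load them. The snippet below is a minimal, illustrative sketch using `llama-cpp-python` with the Q4_K_M file; it assumes the package is installed and the file has been downloaded locally, and the prompt and parameters are placeholders to adapt to your use case.
```python
# Minimal sketch: loading one of the GGUF quantizations above with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and a locally downloaded Q4_K_M file.
from llama_cpp import Llama

llm = Llama(
    model_path="promptv5-finetuned-CodeLlama7b-Instruct.Q4_K_M.gguf",
    n_ctx=4096,   # context window; lower it if memory is tight
)

out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```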
Original model description:
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
neurocoder/Qwen2.5-0.5B-Instruct-MemoryR | neurocoder | "2025-04-06T23:19:36Z" | 0 | 0 | null | [
"tensorboard",
"safetensors",
"en",
"dataset:AI-MO/NuminaMath-TIR",
"arxiv:2504.02273",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.5-0.5B-Instruct",
"license:apache-2.0",
"region:us"
] | null | "2025-04-05T23:54:16Z" | ---
license: apache-2.0
datasets:
- AI-MO/NuminaMath-TIR
language:
- en
metrics:
- accuracy
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
---
# NeuroCoder Qwen2.5-0.5B-Instruct-MemoryR
## Overview
This is the Hugging Face checkpoint of **Qwen2.5-0.5B-Instruct-MemoryR**, a memory-augmented, RL-tuned model built on Qwen2.5-0.5B-Instruct.
The model is introduced and analyzed in our paper: https://arxiv.org/abs/2504.02273
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load tokenizer and model
tokenizer = AutoTokenizer.from_pretrained("neurocoder/Qwen2.5-0.5B-Instruct-MemoryR")
model = AutoModelForCausalLM.from_pretrained("neurocoder/Qwen2.5-0.5B-Instruct-MemoryR")
# Example input
prompt = "What is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt")
# Generate output
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
``` |
HPLT/translate-en-uk-v2.0-hplt_opus | HPLT | "2025-04-06T23:19:31Z" | 0 | 0 | null | [
"translation",
"en",
"uk",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:19:17Z" |
---
language:
- en
- uk
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the English-Ukrainian (en->uk) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: English
* Target language: Ukrainian
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.en-uk.spm` from this repository.
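As a convenience, both files can be fetched programmatically; the sketch below uses `huggingface_hub` (assuming `pip install huggingface_hub`) and simply prints the local paths, which can then be passed to `marian-decoder`.
```python
# Minimal sketch: downloading the Marian weights and vocabulary from this repository.
from huggingface_hub import hf_hub_download

repo_id = "HPLT/translate-en-uk-v2.0-hplt_opus"
model_path = hf_hub_download(repo_id=repo_id, filename="model.npz.best-chrf.npz")
vocab_path = hf_hub_download(repo_id=repo_id, filename="model.en-uk.spm")
print(model_path, vocab_path)  # pass these paths to marian-decoder
```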
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
genki10/Trial3BERT_AugV8_k7_task1_organization_sp020_lw010_fold4 | genki10 | "2025-04-06T23:19:17Z" | 0 | 0 | transformers | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text-classification | "2025-04-06T22:59:10Z" | ---
library_name: transformers
license: apache-2.0
base_model: bert-base-uncased
tags:
- generated_from_trainer
model-index:
- name: Trial3BERT_AugV8_k7_task1_organization_sp020_lw010_fold4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Trial3BERT_AugV8_k7_task1_organization_sp020_lw010_fold4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6714
- Qwk: 0.4999
- Mse: 0.6714
- Rmse: 0.8194
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 150
### Training results
| Training Loss | Epoch | Step | Validation Loss | Qwk | Mse | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| No log | 1.0 | 5 | 8.9988 | 0.0037 | 8.9988 | 2.9998 |
| No log | 2.0 | 10 | 4.8129 | 0.0023 | 4.8129 | 2.1938 |
| No log | 3.0 | 15 | 2.1529 | 0.1705 | 2.1529 | 1.4673 |
| No log | 4.0 | 20 | 1.1393 | 0.0316 | 1.1393 | 1.0674 |
| No log | 5.0 | 25 | 0.9852 | 0.0559 | 0.9852 | 0.9926 |
| No log | 6.0 | 30 | 0.8211 | 0.2722 | 0.8211 | 0.9061 |
| No log | 7.0 | 35 | 0.6459 | 0.5028 | 0.6459 | 0.8037 |
| No log | 8.0 | 40 | 0.5874 | 0.4492 | 0.5874 | 0.7664 |
| No log | 9.0 | 45 | 0.5656 | 0.5439 | 0.5656 | 0.7520 |
| No log | 10.0 | 50 | 0.6357 | 0.5361 | 0.6357 | 0.7973 |
| No log | 11.0 | 55 | 0.9535 | 0.3655 | 0.9535 | 0.9765 |
| No log | 12.0 | 60 | 0.5033 | 0.5613 | 0.5033 | 0.7094 |
| No log | 13.0 | 65 | 0.5066 | 0.5700 | 0.5066 | 0.7118 |
| No log | 14.0 | 70 | 0.6484 | 0.5180 | 0.6484 | 0.8052 |
| No log | 15.0 | 75 | 0.6914 | 0.5309 | 0.6914 | 0.8315 |
| No log | 16.0 | 80 | 0.5890 | 0.5882 | 0.5890 | 0.7675 |
| No log | 17.0 | 85 | 0.7309 | 0.5239 | 0.7309 | 0.8549 |
| No log | 18.0 | 90 | 0.6221 | 0.5641 | 0.6221 | 0.7888 |
| No log | 19.0 | 95 | 0.9453 | 0.3789 | 0.9453 | 0.9722 |
| No log | 20.0 | 100 | 0.6229 | 0.5271 | 0.6229 | 0.7892 |
| No log | 21.0 | 105 | 0.5749 | 0.5416 | 0.5749 | 0.7582 |
| No log | 22.0 | 110 | 0.5326 | 0.5686 | 0.5326 | 0.7298 |
| No log | 23.0 | 115 | 0.6291 | 0.5256 | 0.6291 | 0.7932 |
| No log | 24.0 | 120 | 0.7939 | 0.4556 | 0.7939 | 0.8910 |
| No log | 25.0 | 125 | 0.6364 | 0.5376 | 0.6364 | 0.7977 |
| No log | 26.0 | 130 | 0.6661 | 0.5346 | 0.6661 | 0.8161 |
| No log | 27.0 | 135 | 0.8633 | 0.4200 | 0.8633 | 0.9291 |
| No log | 28.0 | 140 | 0.8986 | 0.4098 | 0.8986 | 0.9479 |
| No log | 29.0 | 145 | 0.6249 | 0.5388 | 0.6249 | 0.7905 |
| No log | 30.0 | 150 | 0.5199 | 0.5914 | 0.5199 | 0.7211 |
| No log | 31.0 | 155 | 0.5676 | 0.5530 | 0.5676 | 0.7534 |
| No log | 32.0 | 160 | 0.6683 | 0.5186 | 0.6683 | 0.8175 |
| No log | 33.0 | 165 | 0.7017 | 0.5000 | 0.7017 | 0.8377 |
| No log | 34.0 | 170 | 0.5560 | 0.5487 | 0.5560 | 0.7456 |
| No log | 35.0 | 175 | 0.5765 | 0.5561 | 0.5765 | 0.7593 |
| No log | 36.0 | 180 | 0.5661 | 0.5759 | 0.5661 | 0.7524 |
| No log | 37.0 | 185 | 0.5691 | 0.5593 | 0.5691 | 0.7544 |
| No log | 38.0 | 190 | 0.7085 | 0.4907 | 0.7085 | 0.8417 |
| No log | 39.0 | 195 | 0.6271 | 0.5456 | 0.6271 | 0.7919 |
| No log | 40.0 | 200 | 0.7688 | 0.4181 | 0.7688 | 0.8768 |
| No log | 41.0 | 205 | 0.5732 | 0.5649 | 0.5732 | 0.7571 |
| No log | 42.0 | 210 | 0.5491 | 0.5656 | 0.5491 | 0.7410 |
| No log | 43.0 | 215 | 0.9099 | 0.3698 | 0.9099 | 0.9539 |
| No log | 44.0 | 220 | 0.5444 | 0.5725 | 0.5444 | 0.7379 |
| No log | 45.0 | 225 | 0.6714 | 0.4999 | 0.6714 | 0.8194 |
### Framework versions
- Transformers 4.47.0
- Pytorch 2.5.1+cu121
- Datasets 3.3.1
- Tokenizers 0.21.0
|
HPLT/translate-uk-en-v2.0-hplt_opus | HPLT | "2025-04-06T23:19:09Z" | 0 | 0 | null | [
"translation",
"uk",
"en",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:18:51Z" |
---
language:
- uk
- en
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the Ukrainian-English (uk->en) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: Ukrainian
* Target language: English
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.uk-en.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
tinycompany/Llamaify-T0.6-3B | tinycompany | "2025-04-06T23:19:04Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"llama",
"text-generation",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | text-generation | "2025-04-06T23:12:06Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
HPLT/translate-en-te-v2.0-hplt_opus | HPLT | "2025-04-06T23:18:40Z" | 0 | 0 | null | [
"translation",
"en",
"te",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:18:21Z" |
---
language:
- en
- te
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the English-Telugu (en->te) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: English
* Target language: Telugu
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.en-te.spm` from this repository.
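As a rough sketch only (not an official HPLT command line), the snippet below shells out to `marian-decoder` with the two files named above; the binary path, the input/output file names, and reusing the same SentencePiece model for both source and target vocabularies are assumptions to adapt to your setup.

```python
# Rough sketch only: decode with marian-decoder using the files from this
# repository. Assumes marian-decoder is on PATH and that input.en contains
# one English sentence per line; file names are placeholders.
import subprocess

MODEL = "model.npz.best-chrf.npz"   # model weights from this repository
VOCAB = "model.en-te.spm"           # SentencePiece vocabulary (used for both sides here)

with open("input.en", encoding="utf-8") as src, \
     open("output.te", "w", encoding="utf-8") as tgt:
    subprocess.run(
        ["marian-decoder", "-m", MODEL, "-v", VOCAB, VOCAB],
        stdin=src,
        stdout=tgt,
        check=True,
    )
```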
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
HPLT/translate-te-en-v2.0-hplt_opus | HPLT | "2025-04-06T23:18:13Z" | 0 | 0 | null | [
"translation",
"te",
"en",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:17:58Z" |
---
language:
- te
- en
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the Telugu-English (te->en) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: Telugu
* Target language: English
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.te-en.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
stabgan/gemma-3-checkpoint-20250406_231730 | stabgan | "2025-04-06T23:18:09Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"gemma3",
"trl",
"en",
"base_model:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"base_model:finetune:unsloth/gemma-3-4b-it-unsloth-bnb-4bit",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | "2025-04-06T23:17:37Z" | ---
base_model: unsloth/gemma-3-4b-it-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- gemma3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** stabgan
- **License:** apache-2.0
- **Finetuned from model :** unsloth/gemma-3-4b-it-unsloth-bnb-4bit
This gemma3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Hugging Face's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
HPLT/translate-en-ko-v2.0-hplt_opus | HPLT | "2025-04-06T23:15:14Z" | 0 | 0 | null | [
"translation",
"en",
"ko",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:14:58Z" |
---
language:
- en
- ko
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the English-Korean (en->ko) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: English
* Target language: Korean
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.en-ko.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
HPLT/translate-ko-en-v2.0-hplt_opus | HPLT | "2025-04-06T23:14:45Z" | 0 | 0 | null | [
"translation",
"ko",
"en",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:14:30Z" |
---
language:
- ko
- en
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the Korean-English (ko->en) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: Korean
* Target language: English
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.ko-en.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
HPLT/translate-ja-en-v2.0-hplt_opus | HPLT | "2025-04-06T23:13:49Z" | 0 | 0 | null | [
"translation",
"ja",
"en",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:13:32Z" |
---
language:
- ja
- en
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the Japanese-English (ja->en) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: Japanese
* Target language: English
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.ja-en.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
HPLT/translate-en-ga-v2.0-hplt_opus | HPLT | "2025-04-06T23:12:34Z" | 0 | 0 | null | [
"translation",
"en",
"ga",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:12:16Z" |
---
language:
- en
- ga
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the English-Irish (en->ga) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: English
* Target language: Irish
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.en-ga.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
HPLT/translate-en-et-v2.0-hplt_opus | HPLT | "2025-04-06T23:11:42Z" | 0 | 0 | null | [
"translation",
"en",
"et",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:11:31Z" |
---
language:
- en
- et
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the English-Estonian (en->et) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: English
* Target language: Estonian
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.en-et.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
diliash/emuLM-spt-colored-rounded-dora | diliash | "2025-04-06T23:11:04Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"dora_run_rounded_colored_20250406_213804",
"20250406_213804",
"lora-finetuning",
"generated_from_trainer",
"final-model",
"processor",
"base_model:meta-llama/Llama-3.2-11B-Vision-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-11B-Vision-Instruct",
"license:llama3.2",
"endpoints_compatible",
"region:us"
] | null | "2025-04-05T15:19:33Z" | ---
library_name: transformers
license: llama3.2
base_model: meta-llama/Llama-3.2-11B-Vision-Instruct
tags:
- dora_run_rounded_colored_20250406_213804
- '20250406_213804'
- lora-finetuning
- generated_from_trainer
- final-model
- processor
model-index:
- name: checkpoints
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# checkpoints
This model is a fine-tuned version of [meta-llama/Llama-3.2-11B-Vision-Instruct](https://huggingface.co/meta-llama/Llama-3.2-11B-Vision-Instruct) on an unspecified dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 2
- total_eval_batch_size: 2
- optimizer: AdamW (torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 3
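As noted above, here is a hedged sketch of these settings expressed as `TrainingArguments`; `output_dir` and anything not in the list are assumptions rather than values recovered from the actual run, and the 2-GPU distributed launch is handled outside these arguments.

```python
# Hedged sketch: the listed hyperparameters expressed as transformers
# TrainingArguments. output_dir and anything not in the list above are
# assumptions; the multi-GPU launch is configured outside this object.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="checkpoints",        # assumption
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=42,
    optim="adamw_torch",             # OptimizerNames.ADAMW_TORCH
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=3,
)
```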
### Framework versions
- Transformers 4.50.3
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1
|
HPLT/translate-az-en-v2.0-hplt_opus | HPLT | "2025-04-06T23:10:12Z" | 0 | 0 | null | [
"translation",
"az",
"en",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:09:57Z" |
---
language:
- az
- en
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the Azerbaijani-English (az->en) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: Azerbaijani
* Target language: English
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.az-en.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
mikeogezi/data_wp_output_gpt_4o_mini_style_595404_llama-3.2-1b-instruct_lora_128_sample_500 | mikeogezi | "2025-04-06T23:09:31Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-03T12:26:30Z" | ---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
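Until an official snippet is provided, the following is a hedged sketch that assumes this repository holds a standard causal-LM checkpoint (for example, merged LoRA weights) loadable directly with `transformers`; if it stores a bare adapter instead, load the base Llama-3.2-1B-Instruct model and attach the adapter with `peft`.

```python
# Hedged sketch only: assumes a standard causal-LM checkpoint in this repo.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "mikeogezi/data_wp_output_gpt_4o_mini_style_595404_llama-3.2-1b-instruct_lora_128_sample_500"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

prompt = "Write the opening paragraph of a short story about a lighthouse keeper."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```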
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
HPLT/translate-tr-en-v2.0-hplt_opus | HPLT | "2025-04-06T23:08:35Z" | 0 | 0 | null | [
"translation",
"tr",
"en",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:08:18Z" |
---
language:
- tr
- en
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the Turkish-English (tr->en) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: Turkish
* Target language: English
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.tr-en.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
HPLT/translate-en-sw-v2.0-hplt_opus | HPLT | "2025-04-06T23:08:10Z" | 0 | 0 | null | [
"translation",
"en",
"sw",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:07:53Z" |
---
language:
- en
- sw
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the English-Swahili (en->sw) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: English
* Target language: Swahili
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.en-sw.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
drewThomasson/fineTunedTTSModels | drewThomasson | "2025-04-06T23:07:14Z" | 0 | 4 | null | [
"onnx",
"license:apache-2.0",
"region:us"
] | null | "2024-12-13T00:00:24Z" | ---
license: apache-2.0
---
|
HPLT/translate-en-nb-v2.0-hplt_opus | HPLT | "2025-04-06T23:06:26Z" | 0 | 0 | null | [
"translation",
"en",
"nb",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:06:25Z" |
---
language:
- en
- nb
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the English-Norwegian Bokmål (en->nb) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: English
* Target language: Norwegian Bokmål
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.en-nb.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
HPLT/translate-nb-en-v2.0-hplt_opus | HPLT | "2025-04-06T23:06:18Z" | 0 | 0 | null | [
"translation",
"nb",
"en",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:06:17Z" |
---
language:
- nb
- en
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the Norwegian Bokmål-English (nb->en) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: Norwegian Bokmål
* Target language: English
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.nb-en.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
HPLT/translate-ml-en-v2.0-hplt_opus | HPLT | "2025-04-06T23:05:45Z" | 0 | 0 | null | [
"translation",
"ml",
"en",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:05:30Z" |
---
language:
- ml
- en
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the Malayalam-English (ml->en) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: Malayalam
* Target language: English
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.ml-en.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
HPLT/translate-en-kn-v2.0-hplt_opus | HPLT | "2025-04-06T23:05:21Z" | 0 | 0 | null | [
"translation",
"en",
"kn",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:05:06Z" |
---
language:
- en
- kn
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the English-Kannada (en->kn) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: English
* Target language: Kannada
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.en-kn.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
HPLT/translate-kn-en-v2.0-hplt_opus | HPLT | "2025-04-06T23:04:56Z" | 0 | 0 | null | [
"translation",
"kn",
"en",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:04:43Z" |
---
language:
- kn
- en
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the Kannada-English (kn->en) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: Kannada
* Target language: English
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.kn-en.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
VladikTI/VanGhog_Style | VladikTI | "2025-04-06T23:03:03Z" | 0 | 0 | diffusers | [
"diffusers",
"tensorboard",
"text-to-image",
"diffusers-training",
"lora",
"template:sd-lora",
"stable-diffusion-xl",
"stable-diffusion-xl-diffusers",
"base_model:stabilityai/stable-diffusion-xl-base-1.0",
"base_model:adapter:stabilityai/stable-diffusion-xl-base-1.0",
"license:openrail++",
"region:us"
] | text-to-image | "2025-04-06T23:02:57Z" | ---
base_model: stabilityai/stable-diffusion-xl-base-1.0
library_name: diffusers
license: openrail++
instance_prompt: photo collage in CHERKASHIN style
widget: []
tags:
- text-to-image
- text-to-image
- diffusers-training
- diffusers
- lora
- template:sd-lora
- stable-diffusion-xl
- stable-diffusion-xl-diffusers
---
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# SDXL LoRA DreamBooth - VladikTI/VanGhog_Style
<Gallery />
## Model description
These are VladikTI/VanGhog_Style LoRA adaptation weights for stabilityai/stable-diffusion-xl-base-1.0.
The weights were trained using [DreamBooth](https://dreambooth.github.io/).
LoRA for the text encoder was enabled: False.
Special VAE used for training: madebyollin/sdxl-vae-fp16-fix.
## Trigger words
You should use `photo collage in CHERKASHIN style` to trigger the image generation.
## Download model
Weights for this model are available in Safetensors format.
[Download](https://huggingface.co/VladikTI/VanGhog_Style/tree/main) them in the Files & versions tab.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
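Until the TODO above is filled in, here is a minimal, unofficial sketch of loading these LoRA weights with `diffusers`; the dtype, device, and inference settings are assumptions.

```python
# Hedged sketch (not an official example): load SDXL base, attach this LoRA,
# and prompt with the trigger phrase documented above. dtype/device and the
# extra prompt text are assumptions.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("VladikTI/VanGhog_Style")

image = pipe(
    "photo collage in CHERKASHIN style, sunflowers in a night sky",
    num_inference_steps=30,
).images[0]
image.save("vanghog_style.png")
```

Since training used the madebyollin/sdxl-vae-fp16-fix VAE, explicitly loading that VAE into the pipeline may reduce fp16 artifacts.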
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training details
[TODO: describe the data used to train the model] |
HPLT/translate-en-fa-v2.0-hplt_opus | HPLT | "2025-04-06T23:02:50Z" | 0 | 0 | null | [
"translation",
"en",
"fa",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:02:35Z" |
---
language:
- en
- fa
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the English-Persian (en->fa) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: English
* Target language: Persian
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.en-fa.spm` from this repository.
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546]
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
abharadwaj123/skywork-27b-fine-tuned-0-3 | abharadwaj123 | "2025-04-06T23:01:20Z" | 0 | 0 | transformers | [
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | "2025-04-06T23:01:17Z" | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed] |
HPLT/translate-bn-en-v2.0-hplt_opus | HPLT | "2025-04-06T23:00:40Z" | 0 | 0 | null | [
"translation",
"bn",
"en",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T23:00:24Z" |
---
language:
- bn
- en
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the Bangla-English (bn->en) encoder-decoder translation model trained on HPLT v2.0 and OPUS parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: Bangla
* Target language: English
* Data: HPLT v2.0 and OPUS parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.bn-en.spm` from this repository.
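A rough, non-official sketch of that workflow in Python is shown below; the decoder flags and the example Bangla sentence are assumptions based on standard Marian usage, not files or instructions from this repository.
```python
# Sketch: assumes a local marian-decoder binary on PATH; flags follow standard Marian usage.
import subprocess
from huggingface_hub import hf_hub_download

repo = "HPLT/translate-bn-en-v2.0-hplt_opus"
model = hf_hub_download(repo, "model.npz.best-chrf.npz")
vocab = hf_hub_download(repo, "model.bn-en.spm")

source_text = "আজ আবহাওয়া খুব সুন্দর।\n"  # example Bangla input, one sentence per line
result = subprocess.run(
    ["marian-decoder", "-m", model, "-v", vocab, vocab, "--cpu-threads", "4"],
    input=source_text,
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # English translation
```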
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546].
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|
mradermacher/Scrivener-Base-V6-LLaMA-70B-i1-GGUF | mradermacher | "2025-04-06T23:00:27Z" | 0 | 0 | transformers | [
"transformers",
"gguf",
"mergekit",
"merge",
"en",
"base_model:TareksLab/Scrivener-Base-V6-LLaMA-70B",
"base_model:quantized:TareksLab/Scrivener-Base-V6-LLaMA-70B",
"endpoints_compatible",
"region:us",
"imatrix",
"conversational"
] | null | "2025-04-06T10:01:14Z" | ---
base_model: TareksLab/Scrivener-Base-V6-LLaMA-70B
language:
- en
library_name: transformers
quantized_by: mradermacher
tags:
- mergekit
- merge
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: nicoboss -->
weighted/imatrix quants of https://huggingface.co/TareksLab/Scrivener-Base-V6-LLaMA-70B
<!-- provided-files -->
static quants are available at https://huggingface.co/mradermacher/Scrivener-Base-V6-LLaMA-70B-GGUF
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including how to concatenate multi-part files.
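For a quick local test, a minimal sketch with `llama-cpp-python` is shown below; the choice of the Q4_K_M quant (the "recommended" row in the table) and the completion call are illustrative assumptions, not part of this release.
```python
# Sketch: load one of the single-file quants with llama-cpp-python.
# The filename matches the i1-Q4_K_M row in the table below; multi-part quants
# (e.g. i1-Q6_K) must be concatenated into one file before loading.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    "mradermacher/Scrivener-Base-V6-LLaMA-70B-i1-GGUF",
    "Scrivener-Base-V6-LLaMA-70B.i1-Q4_K_M.gguf",
)
llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)  # -1 offloads all layers if built with GPU support
out = llm("Write a short scene set in a lighthouse.", max_tokens=256)
print(out["choices"][0]["text"])
```
Note that the Q4_K_M file is about 42.6 GB, so plan RAM/VRAM accordingly.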
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similarly sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Scrivener-Base-V6-LLaMA-70B-i1-GGUF/resolve/main/Scrivener-Base-V6-LLaMA-70B.i1-IQ1_S.gguf) | i1-IQ1_S | 15.4 | for the desperate |
| [GGUF](https://huggingface.co/mradermacher/Scrivener-Base-V6-LLaMA-70B-i1-GGUF/resolve/main/Scrivener-Base-V6-LLaMA-70B.i1-IQ1_M.gguf) | i1-IQ1_M | 16.9 | mostly desperate |
| [GGUF](https://huggingface.co/mradermacher/Scrivener-Base-V6-LLaMA-70B-i1-GGUF/resolve/main/Scrivener-Base-V6-LLaMA-70B.i1-IQ2_XXS.gguf) | i1-IQ2_XXS | 19.2 | |
| [GGUF](https://huggingface.co/mradermacher/Scrivener-Base-V6-LLaMA-70B-i1-GGUF/resolve/main/Scrivener-Base-V6-LLaMA-70B.i1-IQ2_XS.gguf) | i1-IQ2_XS | 21.2 | |
| [GGUF](https://huggingface.co/mradermacher/Scrivener-Base-V6-LLaMA-70B-i1-GGUF/resolve/main/Scrivener-Base-V6-LLaMA-70B.i1-IQ2_S.gguf) | i1-IQ2_S | 22.3 | |
| [GGUF](https://huggingface.co/mradermacher/Scrivener-Base-V6-LLaMA-70B-i1-GGUF/resolve/main/Scrivener-Base-V6-LLaMA-70B.i1-IQ2_M.gguf) | i1-IQ2_M | 24.2 | |
| [GGUF](https://huggingface.co/mradermacher/Scrivener-Base-V6-LLaMA-70B-i1-GGUF/resolve/main/Scrivener-Base-V6-LLaMA-70B.i1-Q2_K_S.gguf) | i1-Q2_K_S | 24.6 | very low quality |
| [GGUF](https://huggingface.co/mradermacher/Scrivener-Base-V6-LLaMA-70B-i1-GGUF/resolve/main/Scrivener-Base-V6-LLaMA-70B.i1-Q2_K.gguf) | i1-Q2_K | 26.5 | IQ3_XXS probably better |
| [GGUF](https://huggingface.co/mradermacher/Scrivener-Base-V6-LLaMA-70B-i1-GGUF/resolve/main/Scrivener-Base-V6-LLaMA-70B.i1-IQ3_XXS.gguf) | i1-IQ3_XXS | 27.6 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Scrivener-Base-V6-LLaMA-70B-i1-GGUF/resolve/main/Scrivener-Base-V6-LLaMA-70B.i1-IQ3_XS.gguf) | i1-IQ3_XS | 29.4 | |
| [GGUF](https://huggingface.co/mradermacher/Scrivener-Base-V6-LLaMA-70B-i1-GGUF/resolve/main/Scrivener-Base-V6-LLaMA-70B.i1-IQ3_S.gguf) | i1-IQ3_S | 31.0 | beats Q3_K* |
| [GGUF](https://huggingface.co/mradermacher/Scrivener-Base-V6-LLaMA-70B-i1-GGUF/resolve/main/Scrivener-Base-V6-LLaMA-70B.i1-Q3_K_S.gguf) | i1-Q3_K_S | 31.0 | IQ3_XS probably better |
| [GGUF](https://huggingface.co/mradermacher/Scrivener-Base-V6-LLaMA-70B-i1-GGUF/resolve/main/Scrivener-Base-V6-LLaMA-70B.i1-IQ3_M.gguf) | i1-IQ3_M | 32.0 | |
| [GGUF](https://huggingface.co/mradermacher/Scrivener-Base-V6-LLaMA-70B-i1-GGUF/resolve/main/Scrivener-Base-V6-LLaMA-70B.i1-Q3_K_M.gguf) | i1-Q3_K_M | 34.4 | IQ3_S probably better |
| [GGUF](https://huggingface.co/mradermacher/Scrivener-Base-V6-LLaMA-70B-i1-GGUF/resolve/main/Scrivener-Base-V6-LLaMA-70B.i1-Q3_K_L.gguf) | i1-Q3_K_L | 37.2 | IQ3_M probably better |
| [GGUF](https://huggingface.co/mradermacher/Scrivener-Base-V6-LLaMA-70B-i1-GGUF/resolve/main/Scrivener-Base-V6-LLaMA-70B.i1-IQ4_XS.gguf) | i1-IQ4_XS | 38.0 | |
| [GGUF](https://huggingface.co/mradermacher/Scrivener-Base-V6-LLaMA-70B-i1-GGUF/resolve/main/Scrivener-Base-V6-LLaMA-70B.i1-Q4_0.gguf) | i1-Q4_0 | 40.2 | fast, low quality |
| [GGUF](https://huggingface.co/mradermacher/Scrivener-Base-V6-LLaMA-70B-i1-GGUF/resolve/main/Scrivener-Base-V6-LLaMA-70B.i1-Q4_K_S.gguf) | i1-Q4_K_S | 40.4 | optimal size/speed/quality |
| [GGUF](https://huggingface.co/mradermacher/Scrivener-Base-V6-LLaMA-70B-i1-GGUF/resolve/main/Scrivener-Base-V6-LLaMA-70B.i1-Q4_K_M.gguf) | i1-Q4_K_M | 42.6 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Scrivener-Base-V6-LLaMA-70B-i1-GGUF/resolve/main/Scrivener-Base-V6-LLaMA-70B.i1-Q4_1.gguf) | i1-Q4_1 | 44.4 | |
| [GGUF](https://huggingface.co/mradermacher/Scrivener-Base-V6-LLaMA-70B-i1-GGUF/resolve/main/Scrivener-Base-V6-LLaMA-70B.i1-Q5_K_S.gguf) | i1-Q5_K_S | 48.8 | |
| [GGUF](https://huggingface.co/mradermacher/Scrivener-Base-V6-LLaMA-70B-i1-GGUF/resolve/main/Scrivener-Base-V6-LLaMA-70B.i1-Q5_K_M.gguf) | i1-Q5_K_M | 50.0 | |
| [PART 1](https://huggingface.co/mradermacher/Scrivener-Base-V6-LLaMA-70B-i1-GGUF/resolve/main/Scrivener-Base-V6-LLaMA-70B.i1-Q6_K.gguf.part1of2) [PART 2](https://huggingface.co/mradermacher/Scrivener-Base-V6-LLaMA-70B-i1-GGUF/resolve/main/Scrivener-Base-V6-LLaMA-70B.i1-Q6_K.gguf.part2of2) | i1-Q6_K | 58.0 | practically like static Q6_K |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time. Additional thanks to [@nicoboss](https://huggingface.co/nicoboss) for giving me access to his private supercomputer, enabling me to provide many more imatrix quants, at much higher quality, than I would otherwise be able to.
<!-- end -->
|
HPLT/translate-en-xh-v2.0-hplt | HPLT | "2025-04-06T22:59:57Z" | 0 | 0 | null | [
"translation",
"en",
"xh",
"arxiv:2503.10267",
"license:cc-by-4.0",
"region:us"
] | translation | "2025-04-06T22:59:41Z" |
---
language:
- en
- xh
tags:
- translation
license: cc-by-4.0
inference: false
---
<img src="https://hplt-project.org/_next/static/media/logo-hplt.d5e16ca5.svg" width=12.5%>
### HPLT MT release v2.0
This repository contains the English-Xhosa (en->xh) encoder-decoder translation model trained on HPLT v2.0 parallel data. The model is currently available in Marian format and we are working on converting it to the Hugging Face format.
### Model Info
* Source language: English
* Target language: Xhosa
* Data: HPLT v2.0 parallel data
* Model architecture: Transformer-base
* Tokenizer: SentencePiece (Unigram)
You can check out our [paper](https://arxiv.org/abs/2503.10267), [GitHub repository](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v2.0), or [website](https://hplt-project.org) for more details.
### Usage
The model has been trained with [MarianNMT](https://github.com/marian-nmt/marian) and the weights are in the Marian format.
#### Using Marian
To run inference with MarianNMT, refer to the [Inference/Decoding/Translation](https://github.com/hplt-project/HPLT-MT-Models/tree/main/v1.0#inferencedecodingtranslation) section of our GitHub repository. You will need the model file `model.npz.best-chrf.npz` and the vocabulary file `model.en-xh.spm` from this repository.
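A rough, non-official sketch of that workflow in Python is shown below; the decoder flags follow standard Marian usage and are not taken from this repository.
```python
# Sketch: assumes a local marian-decoder binary on PATH; flags follow standard Marian usage.
import subprocess
from huggingface_hub import hf_hub_download

repo = "HPLT/translate-en-xh-v2.0-hplt"
model = hf_hub_download(repo, "model.npz.best-chrf.npz")
vocab = hf_hub_download(repo, "model.en-xh.spm")

result = subprocess.run(
    ["marian-decoder", "-m", model, "-v", vocab, vocab, "--cpu-threads", "4"],
    input="The weather is lovely today.\n",
    capture_output=True,
    text=True,
)
print(result.stdout.strip())  # Xhosa translation, one line per input sentence
```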
#### Using transformers
We are working on this.
### Acknowledgements
This project has received funding from the European Union's Horizon Europe research and innovation programme under grant agreement No 101070350 and from UK Research and Innovation (UKRI) under the UK government's Horizon Europe funding guarantee [grant number 10052546].
### Citation
If you find this model useful, please cite the following paper:
```bibtex
@article{hpltv2,
title={An Expanded Massive Multilingual Dataset for High-Performance Language Technologies},
author={Laurie Burchell and Ona de Gibert and Nikolay Arefyev and Mikko Aulamo and Marta Bañón and Pinzhen Chen and Mariia Fedorova and Liane Guillou and Barry Haddow and Jan Hajič and Jindřich Helcl and Erik Henriksson and Mateusz Klimaszewski and Ville Komulainen and Andrey Kutuzov and Joona Kytöniemi and Veronika Laippala and Petter Mæhlum and Bhavitvya Malik and Farrokh Mehryary and Vladislav Mikhailov and Nikita Moghe and Amanda Myntti and Dayyán O'Brien and Stephan Oepen and Proyag Pal and Jousia Piha and Sampo Pyysalo and Gema Ramírez-Sánchez and David Samuel and Pavel Stepachev and Jörg Tiedemann and Dušan Variš and Tereza Vojtěchová and Jaume Zaragoza-Bernabeu},
journal={arXiv preprint arXiv:2503.10267},
year={2025},
url={https://arxiv.org/abs/2503.10267},
}
```
|