---
license: gemma
base_model:
- google/gemma-3-27b-it
- nidum/Nidum-gemma-3-27B-it-Uncensored
base_model_relation: quantized
pipeline_tag: image-text-to-text
tags:
- chat
- mlx
- uncensored
- gemma3
- apple
- 6bit
language:
- en
- fr
- es
- de
- it
- hi
- ru
library_name: mlx
---
|
# Gemma-3-27B Instruct Uncensored 6-bit MLX |
|
An uncensored version of **Gemma 3 27B**, quantized to 6-bit in MLX format.
|
|
|
You can also try the newer uncensored version: [Amoral Gemma-3 27B 6-bit MLX](https://huggingface.co/TheCluster/amoral-gemma-3-27B-v2-mlx-6bit)
|
|
|
## Technical Details |
|
|
|
Supports a context length of 128k tokens, with a maximum output of 8192 tokens.
|
|
|
Multimodal: accepts image inputs, normalized to 896 × 896 resolution.
|
|
|
Refer to the [original model card](https://huggingface.co/google/gemma-3-27b-it) and [uncensored model](https://huggingface.co/nidum/Nidum-gemma-3-27B-it-Uncensored) for more details on the model. |
|
|
|
|
|
## Use with mlx |
|
|
|
```bash
pip install -U mlx-vlm
```
|
|
|
```bash
python -m mlx_vlm.generate --model TheCluster/gemma-3-27b-it-uncensored-mlx-6bit --max-tokens 256 --temperature 0.4 --prompt "Describe this image." --image <path_to_image>
```
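The same generation can be run from Python. The sketch below follows the usage pattern from the mlx-vlm README; it is an assumption that this pattern matches version 0.1.19 exactly (the `generate` keyword arguments have changed between mlx-vlm releases), and the image path is a placeholder.

```python
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "TheCluster/gemma-3-27b-it-uncensored-mlx-6bit"

# Load the quantized model and processor (downloads from the Hub on first use)
model, processor = load(model_path)
config = load_config(model_path)

# Wrap the prompt in Gemma 3's chat template, declaring one image input
prompt = apply_chat_template(processor, config, "Describe this image.", num_images=1)

# Run generation; replace the placeholder with a real image path
output = generate(model, processor, prompt, image=["path/to/image.jpg"],
                  max_tokens=256, temperature=0.4, verbose=False)
print(output)
```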
|
|
|
### Source |
|
This model was converted to MLX format from [`nidum/Nidum-gemma-3-27B-it-Uncensored`](https://huggingface.co/nidum/Nidum-gemma-3-27B-it-Uncensored) using mlx-vlm version **0.1.19**.
|
|