---
license: gemma
base_model:
- google/gemma-3-27b-it
- nidum/Nidum-gemma-3-27B-it-Uncensored
pipeline_tag: image-text-to-text
tags:
- chat
- mlx
- uncensored
- apple
- 6bit
library_name: mlx
---

# Gemma-3-27B Instruct Uncensored 6-bit MLX

An uncensored version of **Gemma 3 27B**, quantized to 6-bit and converted for Apple Silicon with MLX.

This model was converted to MLX format from [`nidum/Nidum-gemma-3-27B-it-Uncensored`](https://huggingface.co/nidum/Nidum-gemma-3-27B-it-Uncensored) using mlx-vlm version **0.1.19**.

Refer to the [original model card](https://huggingface.co/google/gemma-3-27b-it) and the [uncensored model](https://huggingface.co/nidum/Nidum-gemma-3-27B-it-Uncensored) for more details.

## Technical Details

Supports a context length of 128k tokens, with a maximum output of 8,192 tokens.

Multimodal: input images are normalized to 896 × 896 resolution.

## Use with mlx

Install mlx-vlm:

```bash
pip install -U mlx-vlm
```

Generate a response from the command line:

```bash
python -m mlx_vlm.generate --model TheCluster/gemma-3-27b-it-uncensored-6bit --max-tokens 128 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```
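
You can also call the model from Python through mlx-vlm. The snippet below is a minimal sketch following the `load`/`generate` pattern from the mlx-vlm README; exact function signatures can vary between mlx-vlm releases, and the image path is a placeholder, so treat it as a starting point rather than a guaranteed interface.

```python
# Minimal sketch of using this model via mlx-vlm's Python API.
# Based on the usage pattern from the mlx-vlm README; function names and
# signatures may differ slightly depending on your installed mlx-vlm version.
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "TheCluster/gemma-3-27b-it-uncensored-6bit"

# Load the 6-bit quantized weights and the matching processor.
model, processor = load(model_path)
config = load_config(model_path)

# One image plus a text prompt (replace the placeholder path with a real image).
images = ["path/to/image.jpg"]
prompt = "Describe this image."

# Wrap the prompt in the model's chat template, declaring one image slot.
formatted_prompt = apply_chat_template(processor, config, prompt, num_images=len(images))

# Generate a response conditioned on the image.
output = generate(model, processor, formatted_prompt, images, verbose=False)
print(output)
```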