TheCluster committed
Commit ecc2378 · verified · 1 Parent(s): de83fe9

Update README.md

Files changed (1): README.md (+18 -6)
README.md CHANGED
@@ -3,22 +3,29 @@ license: gemma
 base_model:
 - google/gemma-3-27b-it
 - nidum/Nidum-gemma-3-27B-it-Uncensored
+base_model_relation: quantized
 pipeline_tag: image-text-to-text
 tags:
 - chat
 - mlx
 - uncensored
+- gemma3
 - apple
 - 6bit
+language:
+- en
+- fr
+- es
+- de
+- it
+- hi
+- ru
 library_name: mlx
 ---
 # Gemma-3-27B Instruct Uncensored 6-bit MLX
 Uncensored version of **Gemma 3 27B**.
 
-This model was converted to MLX format from [`nidum/Nidum-gemma-3-27B-it-Uncensored`]() using mlx-vlm version **0.1.19**.
-
-Refer to the [original model card](https://huggingface.co/google/gemma-3-27b-it) and [uncensored model](https://huggingface.co/nidum/Nidum-gemma-3-27B-it-Uncensored) for more details on the model.
-
+You can also try the new uncensored version: [Amoral Gemma-3 27B 6-bit MLX](https://huggingface.co/TheCluster/amoral-gemma-3-27B-v2-mlx-6bit)
 
 ## Technical Details
 
@@ -26,6 +33,8 @@ Supports a context length of 128k tokens, with a max output of 8192.
 
 Multimodal supporting images normalized to 896 x 896 resolution.
 
+Refer to the [original model card](https://huggingface.co/google/gemma-3-27b-it) and [uncensored model](https://huggingface.co/nidum/Nidum-gemma-3-27B-it-Uncensored) for more details on the model.
+
 
 ## Use with mlx
 
@@ -34,5 +43,8 @@ pip install -U mlx-vlm
 ```
 
 ```bash
-python -m mlx_vlm.generate --model TheCluster/gemma-3-27b-it-uncensored-mlx-6bit --max-tokens 128 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
-```
+python -m mlx_vlm.generate --model TheCluster/gemma-3-27b-it-uncensored-mlx-6bit --max-tokens 256 --temperature 0.4 --prompt "Describe this image." --image <path_to_image>
+```
+
+### Source
+This model was converted to MLX format from [`nidum/Nidum-gemma-3-27B-it-Uncensored`](https://huggingface.co/nidum/Nidum-Gemma-3-27B-it-Uncensored) using mlx-vlm version **0.1.19**.
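For reference, the CLI call in the updated README can also be driven from Python. The sketch below is an editor-added illustration, not part of the model card: it assumes mlx-vlm's documented `load`, `load_config`, `apply_chat_template`, and `generate` helpers, and the image path `cat.png` is a placeholder. Keyword names for sampling options have shifted between mlx-vlm releases, so verify against the README of your installed version (0.1.19 is the one named above).

```python
# Minimal sketch of a Python-API equivalent of the mlx_vlm.generate CLI call.
# Assumes mlx-vlm ~0.1.x; helper names and signatures may differ in other releases.
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_id = "TheCluster/gemma-3-27b-it-uncensored-mlx-6bit"

# Download (if needed) and load the 6-bit MLX weights plus the processor.
model, processor = load(model_id)
config = load_config(model_id)

images = ["cat.png"]  # placeholder path to a local image
prompt = "Describe this image."

# Wrap the prompt in the model's chat template, declaring one attached image.
formatted_prompt = apply_chat_template(processor, config, prompt, num_images=len(images))

# Generate a description; max_tokens mirrors the CLI's --max-tokens flag.
output = generate(model, processor, formatted_prompt, images, max_tokens=256, verbose=False)
print(output)
```

As a rough sizing check, 27B parameters at 6 bits per weight is about 27e9 × 6 / 8 ≈ 20 GB for the weights alone, so an Apple Silicon machine with at least 32 GB of unified memory is a reasonable assumption for running this quant.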