TheCluster committed · Commit a0467c8 · verified · Parent: 3183550

Update README.md

Files changed (1): README.md (+18 -6)

README.md CHANGED
@@ -3,23 +3,30 @@ license: gemma
 base_model:
 - google/gemma-3-27b-it
 - nidum/Nidum-gemma-3-27B-it-Uncensored
+base_model_relation: quantized
 pipeline_tag: image-text-to-text
 tags:
 - chat
 - mlx
 - uncensored
+- gemma3
 - apple
 - 4bit
+language:
+- en
+- fr
+- es
+- de
+- it
+- hi
+- ru
 library_name: mlx
 ---

 # Gemma-3-27B Instruct Uncensored 4-bit MLX
 Uncensored version of **Gemma 3 27B**.

-This model was converted to MLX format from [`nidum/Nidum-gemma-3-27B-it-Uncensored`]() using mlx-vlm version **0.1.19**.
-
-Refer to the [original model card](https://huggingface.co/google/gemma-3-27b-it) and [uncensored model](https://huggingface.co/nidum/Nidum-gemma-3-27B-it-Uncensored) for more details on the model.
-
+You can also try the newer uncensored version: [Amoral Gemma-3 27B 4-bit MLX](https://huggingface.co/TheCluster/amoral-gemma-3-27B-v2-mlx-4bit)

 ## Technical Details

@@ -27,6 +34,8 @@ Supports a context length of 128k tokens, with a max output of 8192.

 Multimodal, supporting images normalized to 896 x 896 resolution.

+Refer to the [original model card](https://huggingface.co/google/gemma-3-27b-it) and the [uncensored model](https://huggingface.co/nidum/Nidum-gemma-3-27B-it-Uncensored) for more details on the model.
+

 ## Use with mlx

@@ -35,5 +44,8 @@ pip install -U mlx-vlm
 ```

 ```bash
-python -m mlx_vlm.generate --model TheCluster/gemma-3-27b-it-uncensored-mlx-4bit --max-tokens 128 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
-```
+python -m mlx_vlm.generate --model TheCluster/gemma-3-27b-it-uncensored-mlx-4bit --max-tokens 256 --temperature 0.4 --prompt "Describe this image." --image <path_to_image>
+```
+
+### Source
+This model was converted to MLX format from [`nidum/Nidum-gemma-3-27B-it-Uncensored`](https://huggingface.co/nidum/Nidum-Gemma-3-27B-it-Uncensored) using mlx-vlm version **0.1.19**.
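
For scripted use, the same generation can also be driven from Python instead of the CLI. The snippet below is a minimal sketch assuming the `load`, `load_config`, `apply_chat_template`, and `generate` helpers documented for mlx-vlm 0.1.x; the image path is illustrative, and exact keyword arguments (e.g. sampling temperature) may differ between mlx-vlm releases.

```python
# Minimal sketch of a Python equivalent of the CLI call above (mlx-vlm ~0.1.x API).
from mlx_vlm import load, generate
from mlx_vlm.prompt_utils import apply_chat_template
from mlx_vlm.utils import load_config

model_path = "TheCluster/gemma-3-27b-it-uncensored-mlx-4bit"

# Download (if needed) and load the 4-bit weights plus the processor.
model, processor = load(model_path)
config = load_config(model_path)

images = ["path/to/image.jpg"]  # illustrative local path (a URL also works)
prompt = "Describe this image."

# Wrap the prompt in Gemma 3's chat template, declaring one image slot.
formatted_prompt = apply_chat_template(processor, config, prompt, num_images=len(images))

# Generate up to 256 new tokens, mirroring the CLI example above.
output = generate(model, processor, formatted_prompt, images, max_tokens=256, verbose=False)
print(output)
```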