---
license: apache-2.0
base_model:
- TIGER-Lab/VL-Rethinker-7B
base_model_relation: quantized
pipeline_tag: image-text-to-text
tags:
- chat
- mlx
- apple
- 4bit
- multimodal
language:
- en
library_name: mlx
---

# VL-Rethinker-7B 4-bit MLX

This model was converted to MLX format from [`TIGER-Lab/VL-Rethinker-7B`](https://huggingface.co/TIGER-Lab/VL-Rethinker-7B) using mlx-vlm version **0.1.23**.

Refer to the [original model card](https://huggingface.co/TIGER-Lab/VL-Rethinker-7B) for more details on the model.

### Important!

If you use LM Studio, do not update the MLX runtime to the latest version. The latest MLX runtime (**0.13.1**) has a bug that makes the model crash when you send it images. Use the previous MLX runtime, **0.12.1**, instead.

## Use with mlx

```bash
pip install -U mlx-vlm
```

```bash
python -m mlx_vlm.generate --model TheCluster/VL-Rethinker-7B-mlx-4bit --max-tokens 512 --temperature 0.0 --prompt "Describe this image." --image <path_to_image>
```