---
base_model: google/gemma-3-4b-it
library_name: transformers
license: gemma
pipeline_tag: image-text-to-text
tags:
- llama-cpp
- matrixportal
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage license. To do this, please ensure you’re logged in to Hugging Face and click below. Requests are processed immediately.
extra_gated_button_content: Acknowledge license
---

- **Base model:** [google/gemma-3-4b-it](https://huggingface.co/google/gemma-3-4b-it)
- **License:** gemma

Quantized with llama.cpp using [all-gguf-same-where](https://huggingface.co/spaces/matrixportal/all-gguf-same-where)

---

## 🌍 Models with Turkish Language Support - Performance Analysis

### 📌 Models Tested

The following local models were tested extensively for their **Turkish text generation and comprehension** capabilities:

1. `CohereForAI/c4ai-command-r7b-arabic-02-2025`
2. `CohereForAI/c4ai-command-r7b-12-2024`
3. `CohereForAI/aya-expanse-8b`
4. `ytu-ce-cosmos/Turkish-Llama-8b-Instruct-v0.1`
5. `google/gemma-3-4b-it`
6. `google/gemma-3-4b-pt-qat-q4_0-gguf`

### 🔍 Test Criteria

The models were evaluated with the following three core questions (posed in Turkish):

```markdown
1. "Can you explain the three most important historical monuments in Türkiye and their cultural significance?"
2. "Compare the differences between Istanbul's Asian and European sides, in English and Turkish"
3. "Prepare a 3-day tourist itinerary for Antalya"
```

### 📊 Key Results

| Model              | Cultural Accuracy | Language Fluency | Multilingualism | Greatest Strength                         |
|--------------------|-------------------|------------------|-----------------|-------------------------------------------|
| Turkish-Llama-8B   | ⭐⭐⭐⭐⭐        | ⭐⭐⭐⭐⭐       | ⭐              | Metaphorical expression and local context |
| Aya-Expanse-8B     | ⭐⭐⭐⭐          | ⭐⭐⭐⭐         | ⭐⭐⭐⭐⭐      | Support for 20+ languages                 |
| Command R7B Arabic | ⭐⭐⭐            | ⭐⭐⭐           | ⭐⭐⭐          | Arabic-Turkish translation                |

### 🔗 Full Test Results

For a detailed look at all model responses:
[📂 Comparison of Models with Turkish Support](https://huggingface.co/matrixportal/Turkce-Destekli-Model-Karsilastirmalari)

### 💡 User Guide

```markdown
- 🔥 **Turkish-first projects** → `Turkish-Llama-8B` (highest cultural fit)
- 🌐 **Multilingual applications** → `Aya-Expanse-8B`
- 📜 **Official documents** → `Command R7B` series
- 📱 **Low-resource devices** → `Gemma-3-4B` (Android-friendly)
```

### ⚠️ Important Notes

- All models were tested in **GGUF quantized** format
- Expect a 5-10% error rate on geographic names
- **Ollama / LM Studio** are recommended for local use (see the sketch below)

---

### 🌟 Personal Experience

"In my tests with local models, **Turkish-Llama-8B** particularly impressed me with its natural use of Turkish and its command of cultural context. Try these models in your own projects and share your results on our [AnkaNLP](https://huggingface.co/AnkaNLP) page!"
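As a rough illustration of the local setup described above, the sketch below sends the first Turkish test prompt to a locally running Ollama server over its REST API. It assumes Ollama is listening on its default port (11434) and that a Gemma 3 4B model has already been pulled; the tag `gemma3:4b` is an example and may differ from the name in your install.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint
MODEL_TAG = "gemma3:4b"  # assumption: use the tag shown by `ollama list` on your machine

# First test prompt from the section above
# ("Can you explain the three most important historical monuments in Türkiye
#   and their cultural significance?")
prompt = "Türkiye'deki en önemli 3 tarihi eseri ve kültürel önemini açıklar mısın?"

payload = json.dumps({"model": MODEL_TAG, "prompt": prompt, "stream": False}).encode("utf-8")
request = urllib.request.Request(
    OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
)

# Ollama returns a single JSON object when streaming is disabled;
# the generated text is in the "response" field.
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read())["response"])
```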
---

## ✅ Quantized Models Download List

### 🔍 Recommended Quantizations
- **✨ General CPU Use:** [`Q4_K_M`](https://huggingface.co/matrixportal/gemma-3-4b-it-GGUF/resolve/main/gemma-3-4b-it-q4_k_m.gguf) (Best balance of speed/quality)
- **📱 ARM Devices:** [`Q4_0`](https://huggingface.co/matrixportal/gemma-3-4b-it-GGUF/resolve/main/gemma-3-4b-it-q4_0.gguf) (Optimized for ARM CPUs)
- **🏆 Maximum Quality:** [`Q8_0`](https://huggingface.co/matrixportal/gemma-3-4b-it-GGUF/resolve/main/gemma-3-4b-it-q8_0.gguf) (Near-original quality)

### 📦 Full Quantization Options

| 🚀 Download | 🔢 Type | 📝 Notes |
|:---------|:-----|:------|
| [Download](https://huggingface.co/matrixportal/gemma-3-4b-it-GGUF/resolve/main/gemma-3-4b-it-q2_k.gguf) | ![Q2_K](https://img.shields.io/badge/Q2_K-1A73E8) | Basic quantization |
| [Download](https://huggingface.co/matrixportal/gemma-3-4b-it-GGUF/resolve/main/gemma-3-4b-it-q3_k_s.gguf) | ![Q3_K_S](https://img.shields.io/badge/Q3_K_S-34A853) | Small size |
| [Download](https://huggingface.co/matrixportal/gemma-3-4b-it-GGUF/resolve/main/gemma-3-4b-it-q3_k_m.gguf) | ![Q3_K_M](https://img.shields.io/badge/Q3_K_M-FBBC05) | Balanced quality |
| [Download](https://huggingface.co/matrixportal/gemma-3-4b-it-GGUF/resolve/main/gemma-3-4b-it-q3_k_l.gguf) | ![Q3_K_L](https://img.shields.io/badge/Q3_K_L-4285F4) | Better quality |
| [Download](https://huggingface.co/matrixportal/gemma-3-4b-it-GGUF/resolve/main/gemma-3-4b-it-q4_0.gguf) | ![Q4_0](https://img.shields.io/badge/Q4_0-EA4335) | Fast on ARM |
| [Download](https://huggingface.co/matrixportal/gemma-3-4b-it-GGUF/resolve/main/gemma-3-4b-it-q4_k_s.gguf) | ![Q4_K_S](https://img.shields.io/badge/Q4_K_S-673AB7) | Fast, recommended |
| [Download](https://huggingface.co/matrixportal/gemma-3-4b-it-GGUF/resolve/main/gemma-3-4b-it-q4_k_m.gguf) | ![Q4_K_M](https://img.shields.io/badge/Q4_K_M-673AB7) ⭐ | Best balance |
| [Download](https://huggingface.co/matrixportal/gemma-3-4b-it-GGUF/resolve/main/gemma-3-4b-it-q5_0.gguf) | ![Q5_0](https://img.shields.io/badge/Q5_0-FF6D01) | Good quality |
| [Download](https://huggingface.co/matrixportal/gemma-3-4b-it-GGUF/resolve/main/gemma-3-4b-it-q5_k_s.gguf) | ![Q5_K_S](https://img.shields.io/badge/Q5_K_S-0F9D58) | Balanced |
| [Download](https://huggingface.co/matrixportal/gemma-3-4b-it-GGUF/resolve/main/gemma-3-4b-it-q5_k_m.gguf) | ![Q5_K_M](https://img.shields.io/badge/Q5_K_M-0F9D58) | High quality |
| [Download](https://huggingface.co/matrixportal/gemma-3-4b-it-GGUF/resolve/main/gemma-3-4b-it-q6_k.gguf) | ![Q6_K](https://img.shields.io/badge/Q6_K-4285F4) 🏆 | Very good quality |
| [Download](https://huggingface.co/matrixportal/gemma-3-4b-it-GGUF/resolve/main/gemma-3-4b-it-q8_0.gguf) | ![Q8_0](https://img.shields.io/badge/Q8_0-EA4335) ⚡ | Fast, best quality |
| [Download](https://huggingface.co/matrixportal/gemma-3-4b-it-GGUF/resolve/main/gemma-3-4b-it-f16.gguf) | ![F16](https://img.shields.io/badge/F16-000000) | Maximum accuracy |

💡 **Tip:** Use `F16` for maximum precision when quality is critical. A download-and-load sketch follows below.
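If you prefer to script the download and run the model directly from Python, the sketch below pulls the recommended `Q4_K_M` file with `huggingface_hub` and loads it with the `llama-cpp-python` bindings. The bindings and the parameter values shown (context size, GPU offload) are assumptions about a typical local setup, not part of this card; any GGUF-capable runtime listed further down works just as well.

```python
# pip install huggingface_hub llama-cpp-python   (a recent build is needed for Gemma 3 support)
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the recommended Q4_K_M quantization from this repository.
model_path = hf_hub_download(
    repo_id="matrixportal/gemma-3-4b-it-GGUF",
    filename="gemma-3-4b-it-q4_k_m.gguf",
)

# Load the model. n_gpu_layers=-1 offloads all layers to a GPU if one is available;
# set it to 0 for CPU-only inference.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# Third test prompt from the analysis above (a 3-day Antalya itinerary).
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Antalya'da 3 günlük turist planı hazırla"}],
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```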
---

# 🚀 Applications and Tools for Locally Quantized LLMs

## 🖥️ Desktop Applications

| Application      | Description                                                                                        | Download Link                                                        |
|------------------|----------------------------------------------------------------------------------------------------|----------------------------------------------------------------------|
| **Llama.cpp**    | A fast and efficient inference engine for GGUF models (see the API sketch at the end of this page). | [GitHub Repository](https://github.com/ggml-org/llama.cpp)            |
| **Ollama**       | A streamlined solution for running LLMs locally.                                                    | [Website](https://ollama.com/)                                        |
| **AnythingLLM**  | An AI-powered knowledge management tool.                                                            | [GitHub Repository](https://github.com/Mintplex-Labs/anything-llm)    |
| **Open WebUI**   | A user-friendly web interface for running local LLMs.                                               | [GitHub Repository](https://github.com/open-webui/open-webui)         |
| **GPT4All**      | A user-friendly desktop chat application supporting various LLMs, compatible with GGUF models.      | [GitHub Repository](https://github.com/nomic-ai/gpt4all)              |
| **LM Studio**    | A desktop application designed to run and manage local LLMs, supporting GGUF format.                | [Website](https://lmstudio.ai/)                                       |

---

## 📱 Mobile Applications

| Application       | Description                                                                            | Download Link                                                                 |
|-------------------|-----------------------------------------------------------------------------------------|-------------------------------------------------------------------------------|
| **ChatterUI**     | A simple and lightweight LLM app for mobile devices.                                     | [GitHub Repository](https://github.com/Vali-98/ChatterUI)                      |
| **Maid**          | Mobile Artificial Intelligence Distribution for running AI models on mobile devices.     | [GitHub Repository](https://github.com/Mobile-Artificial-Intelligence/maid)    |
| **PocketPal AI**  | A mobile AI assistant powered by local models.                                           | [GitHub Repository](https://github.com/a-ghorbani/pocketpal-ai)                |
| **Layla**         | A flexible platform for running various AI models on mobile devices.                     | [Website](https://www.layla-network.ai/)                                       |

---

## 🎨 Image Generation Applications

| Application                         | Description                                                                              | Download Link                                                                 |
|-------------------------------------|-------------------------------------------------------------------------------------------|-------------------------------------------------------------------------------|
| **Stable Diffusion**                | An open-source AI model for generating images from text.                                   | [GitHub Repository](https://github.com/CompVis/stable-diffusion)               |
| **Stable Diffusion WebUI**          | A web application providing access to Stable Diffusion models via a browser interface.     | [GitHub Repository](https://github.com/AUTOMATIC1111/stable-diffusion-webui)   |
| **Local Dream**                     | Android Stable Diffusion with Snapdragon NPU acceleration. Also supports CPU inference.    | [GitHub Repository](https://github.com/xororz/local-dream)                     |
| **Stable-Diffusion-Android (SDAI)** | An open-source AI art application for Android devices, enabling digital art creation.      | [GitHub Repository](https://github.com/ShiftHackZ/Stable-Diffusion-Android)    |

---
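Several of the desktop tools above (llama.cpp's `llama-server`, LM Studio, and Ollama) can expose an OpenAI-compatible HTTP endpoint for the GGUF they have loaded. The sketch below shows one way to query such an endpoint from Python; the port, the placeholder API key, the model name, and the use of the `openai` client package are assumptions about a typical local setup, not requirements of this model card.

```python
# pip install openai   (used here only as a generic client for the local server)
from openai import OpenAI

# Assumptions: llama.cpp's `llama-server` is running with this GGUF loaded, on its
# default port 8080 (LM Studio's local server defaults to http://localhost:1234/v1).
client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key-required")

completion = client.chat.completions.create(
    # llama-server serves the single model it has loaded, so this name is informational.
    model="gemma-3-4b-it",
    messages=[
        {"role": "user", "content": "Summarize the differences between Q4_K_M and Q8_0 quantization."},
    ],
    max_tokens=256,
)
print(completion.choices[0].message.content)
```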