---
language:
- en
- bg
license: cc-by-4.0
size_categories:
- 100K<n<1M
---

With the dataset loaded via `datasets.load_dataset`, each `task` / `expected_output` pair is formatted into a Gemma chat-style `text` field for Unsloth fine-tuning:

```python
def formatting_prompts_func(examples):
    tasks = examples["task"]
    outputs = examples["expected_output"]
    texts = []
    for task, output in zip(tasks, outputs):
        # Gemma chat format: user turn holds the task prompt, model turn holds the reference translation
        text = f"<start_of_turn>user\n{task}<end_of_turn>\n<start_of_turn>model\n{output}<end_of_turn>"
        texts.append(text)
    return {"text": texts}

dataset = dataset.map(formatting_prompts_func, batched=True)
```

## 🎯 Use Cases

- **Fine-tuning Gemma3-270m** for English → Bulgarian translation
- **Training instruction-following translation models**
- **Subtitle translation systems**
- **Cross-lingual dialogue systems**
- **Educational translation tools**

## 🔍 Data Quality

The dataset has been carefully processed to ensure high quality:

- ✅ **HTML/URL removal**: cleaned of HTML tags and URLs
- ✅ **Length filtering**: sentences between 2 and 50 words
- ✅ **Content filtering**: lines containing only numbers or punctuation removed
- ✅ **Encoding**: proper UTF-8 encoding for Cyrillic text
- ✅ **Format validation**: 100% ChessInstruct-compatible format
- ✅ **Zero conversion errors**: all 45,313 examples converted successfully

## 🛠️ Technical Details

### Why the ChessInstruct Format?

This dataset uses the same format as [Thytu/ChessInstruct](https://huggingface.co/datasets/Thytu/ChessInstruct), which is known to work reliably with Unsloth. The format provides:

- Clear separation of the instruction (`task`) and the expected response (`expected_output`)
- A consistent structure that Unsloth can reliably parse
- Task categorization through the `KIND` field
- Direct compatibility with existing Unsloth templates

### Recommended Fine-tuning Settings

```python
from trl import SFTTrainer, SFTConfig

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=8,
        gradient_accumulation_steps=1,
        warmup_steps=5,
        max_steps=100,  # Adjust based on your needs
        learning_rate=5e-5,
        logging_steps=1,
        optim="adamw_8bit",
        weight_decay=0.01,
        lr_scheduler_type="linear",
        seed=3407,
        output_dir="outputs",
    ),
)
```

### Hardware Requirements

- **Minimum**: CPU-only training (2-4 hours on a modern CPU)
- **Recommended**: Tesla T4 GPU in Google Colab (15-30 minutes)
- **Memory**: 4-8 GB RAM, 2-4 GB VRAM

## 📚 Example Usage

```python
# Example translation prompt
sample = dataset["train"][0]
print("Task prompt:")
print(sample["task"])
print("\nExpected translation:")
print(sample["expected_output"])
print("\nTask category:")
print(sample["KIND"])

# Output:
# Task prompt:
# Translate the following English text to Bulgarian:
#
# English: Hello, how are you?
# Bulgarian:
#
# Expected translation:
# Здравей, как си?
#
# Task category:
# TRANSLATION_EN_BG
```
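After fine-tuning, the same Gemma chat layout can be used at inference time. The snippet below is a minimal sketch using 🤗 Transformers; the checkpoint path `gemma3-270m-en-bg`, the `translate` helper, and the generation settings are illustrative assumptions, not part of this dataset, and it presumes the fine-tuned model was saved locally (e.g. via `trainer.save_model(...)`).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder path: wherever the fine-tuned model was saved
model = AutoModelForCausalLM.from_pretrained("gemma3-270m-en-bg")
tokenizer = AutoTokenizer.from_pretrained("gemma3-270m-en-bg")

def translate(english_text: str) -> str:
    # Rebuild the same task prompt layout used in the training examples
    task = (
        "Translate the following English text to Bulgarian:\n\n"
        f"English: {english_text}\nBulgarian:"
    )
    prompt = f"<start_of_turn>user\n{task}<end_of_turn>\n<start_of_turn>model\n"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
    # Decode only the newly generated continuation, not the prompt
    generated = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(generated, skip_special_tokens=True).strip()

print(translate("Hello, how are you?"))
```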
## 🔗 Related Resources

- **Unsloth Framework**: [https://github.com/unslothai/unsloth](https://github.com/unslothai/unsloth)
- **Gemma3 Model**: [https://huggingface.co/google/gemma-3-270m-it](https://huggingface.co/google/gemma-3-270m-it)
- **ChessInstruct Dataset**: [https://huggingface.co/datasets/Thytu/ChessInstruct](https://huggingface.co/datasets/Thytu/ChessInstruct)
- **OpenSubtitles Corpus**: [http://opus.nlpl.eu/OpenSubtitles.php](http://opus.nlpl.eu/OpenSubtitles.php)

## 📄 Citation

If you use this dataset, please cite the original OpenSubtitles corpus:

```bibtex
@inproceedings{lison-tiedemann-2016-opensubtitles2016,
    title = "{O}pen{S}ubtitles2016: Extracting Large Parallel Corpora from Movie and {TV} Subtitles",
    author = "Lison, Pierre and Tiedemann, J{\"o}rg",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1147",
    pages = "923--929",
}
```

## 📋 License

This dataset is released under the **CC-BY-4.0** license, following the OpenSubtitles corpus licensing terms.

## 🤝 Contributing

Found an issue or want to improve the dataset? Please open an issue or submit a pull request!

---

**Created with ❤️ for the open-source ML community**

**Format guaranteed compatible with Unsloth and Gemma3-270m fine-tuning**