---
language:
- en
- bg
license: cc-by-4.0
size_categories:
- 10M<n<100M
---

# Bulgarian-English OpenSubtitles Full Dataset (50M, ChessInstruct Format)

The complete OpenSubtitles EN↔BG parallel corpus (~50M sentence pairs) in ChessInstruct format, prepared for Gemma3-270m fine-tuning with Unsloth.

## 🚀 Usage

### Streaming (Recommended)

```python
from datasets import load_dataset

# Stream the dataset instead of loading all 50M pairs into memory
dataset = load_dataset("zantag/en-bg-os-full-50m", split="train", streaming=True)

for i, example in enumerate(dataset):
    print(example)
    if i >= 2:  # Show first 3 samples
        break
```

### For Small Subsets (Non-streaming)

```python
from datasets import load_dataset

# Load a small subset for testing
dataset = load_dataset("zantag/en-bg-os-full-50m", split="train[:1000]")
print(f"Loaded {len(dataset)} examples")
```

### Fine-tuning with Unsloth (Memory Efficient)

```python
from unsloth import FastLanguageModel
from datasets import load_dataset

# Load model with 4-bit quantization for memory efficiency
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-3-270m-it",
    max_seq_length=2048,
    load_in_4bit=True,  # Reduces GPU memory usage
)

# Use streaming dataset
dataset = load_dataset("zantag/en-bg-os-full-50m", split="train", streaming=True)

# Take a subset for training (adjust size based on your hardware)
dataset = dataset.take(100000)  # Use 100k examples

# Format for training with the Gemma chat template
def formatting_prompts_func(examples):
    tasks = examples["task"]
    expected_outputs = examples["expected_output"]
    texts = []
    for task, output in zip(tasks, expected_outputs):
        text = (
            f"<start_of_turn>user\n{task}<end_of_turn>\n"
            f"<start_of_turn>model\n{output}<end_of_turn>"
        )
        texts.append(text)
    return {"text": texts}

# Apply formatting (works with streaming)
dataset = dataset.map(formatting_prompts_func, batched=True)
```

## 🎯 Use Cases

- **Large-scale machine translation research**
- **Training production-quality EN↔BG translation models**
- **Cross-lingual understanding experiments**
- **Domain adaptation studies across entertainment content**
- **Comparative analysis with other translation datasets** (see the export sketch below)
- **Building robust multilingual applications**

## 🔍 Data Quality

- ✅ **Complete OpenSubtitles Corpus**: All available BG-EN parallel data
- ✅ **Quality Filtering**: Cleaned and preprocessed for optimal training
- ✅ **ChessInstruct Format**: 100% compatible with the Unsloth framework
- ✅ **Memory Optimized**: Supports streaming for large-scale training
- ✅ **Proper Encoding**: UTF-8 with correct Cyrillic character handling

## 🛠️ Technical Recommendations

### Memory Management

For this large dataset, consider these strategies:

```python
from datasets import load_dataset

# 1. Use streaming mode
dataset = load_dataset("zantag/en-bg-os-full-50m", streaming=True)

# 2. Process in chunks
def process_in_chunks(dataset, chunk_size=10000):
    chunk = []
    for example in dataset:
        chunk.append(example)
        if len(chunk) >= chunk_size:
            yield chunk
            chunk = []
    if chunk:  # Don't forget the last chunk
        yield chunk

# 3. Use gradient accumulation for training
from transformers import TrainingArguments

training_args = TrainingArguments(
    # ... other args (e.g. output_dir)
    per_device_train_batch_size=1,   # Small batch size
    gradient_accumulation_steps=32,  # Accumulate gradients
    dataloader_pin_memory=False,     # Reduce memory usage
)
```

### Hardware Requirements

**Minimum Requirements:**
- RAM: 32GB+ (for streaming mode)
- GPU: 16GB+ VRAM (with 4-bit quantization)
- Storage: 20GB+ free space

**Recommended Setup:**
- RAM: 64GB+
- GPU: A100 40GB or H100
- Storage: SSD with 50GB+ free space

## 📈 Performance Benchmarks

This dataset provides:

- **Domain Coverage**: Movies, TV shows, and documentaries across all genres
- **Linguistic Diversity**: Formal and informal register, technical and colloquial terms
- **Cultural Context**: Real-world usage patterns and expressions
- **Translation Quality**: Human-translated subtitle pairs

## 🤝 Contributing

Found issues or improvements? Please open an issue in the source repository.
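## 📤 Exporting a Plain-Text Sample

For comparative analysis or training with conventional MT toolkits, a streamed slice of the corpus can be written out as plain parallel text files. The snippet below is a minimal sketch, not part of the dataset tooling: it assumes each record exposes the `task` and `expected_output` fields used by `formatting_prompts_func` above, with the English side in `task` and the Bulgarian translation in `expected_output`, and the output file names are placeholders.

```python
from datasets import load_dataset

# Sketch: export a parallel text sample from the streamed dataset.
# Assumptions: "task" holds the English side, "expected_output" the Bulgarian
# translation; "sample.en" / "sample.bg" are placeholder file names.
dataset = load_dataset("zantag/en-bg-os-full-50m", split="train", streaming=True)

num_pairs = 10_000  # Size of the exported sample; adjust as needed
with open("sample.en", "w", encoding="utf-8") as src_file, \
     open("sample.bg", "w", encoding="utf-8") as tgt_file:
    for i, example in enumerate(dataset):
        if i >= num_pairs:
            break
        # One sentence pair per line; collapse any embedded newlines
        src_file.write(example["task"].replace("\n", " ").strip() + "\n")
        tgt_file.write(example["expected_output"].replace("\n", " ").strip() + "\n")
```

Because the dataset is streamed, memory use stays flat regardless of how large a sample is exported, in line with the streaming-first recommendations above.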
## 📄 Citation

```bibtex
@dataset{bg_en_opensubtitles_50m_chess,
  title  = {Bulgarian-English OpenSubtitles Full Dataset (50M, ChessInstruct Format)},
  author = {Zantag},
  year   = {2024},
  url    = {https://huggingface.co/datasets/zantag/en-bg-os-full-50m},
  note   = {Complete OpenSubtitles corpus in ChessInstruct format for Gemma3-270m fine-tuning}
}

@inproceedings{lison-tiedemann-2016-opensubtitles2016,
  title     = "{O}pen{S}ubtitles2016: Extracting Large Parallel Corpora from Movie and {TV} Subtitles",
  author    = "Lison, Pierre and Tiedemann, J{\"o}rg",
  booktitle = "Proceedings of LREC 2016",
  year      = "2016",
  url       = "https://aclanthology.org/L16-1147",
}
```

## 📋 License

CC-BY-4.0, compatible with the original OpenSubtitles licensing.

---

**🚀 Ready for large-scale translation model training!**

**⚡ Optimized for memory efficiency and Unsloth compatibility**