---
language:
  - en
  - bg
license: cc-by-4.0
size_categories:
  - 100K<n<1M
task_categories:
  - translation
  - text-generation
task_ids:
  - text2text-generation
tags:
  - subtitles
  - opensubtitles
  - gemma3
  - gemma-270m
  - fine-tuning
  - machine-translation
  - instruction-following
  - unsloth
  - unsloth-compatible
  - chess-instruct-format
pretty_name: Gemma EN-BG Translation Dataset (ChessInstruct Format)
dataset_info:
  features:
    - name: task
      dtype: string
    - name: input
      dtype: string
    - name: expected_output
      dtype: string
    - name: KIND
      dtype: string
  config_name: default
  splits:
    - name: train
      num_examples: 736407
    - name: validation
      num_examples: 92050
    - name: test
      num_examples: 92052
---

# Gemma EN-BG Translation Dataset (ChessInstruct Format)

## 🎯 Overview

This dataset contains 920,509 English-to-Bulgarian subtitle translation pairs in ChessInstruct format, intended for fine-tuning Gemma3-270m with the Unsloth framework. The data is sourced from the OpenSubtitles parallel corpus and formatted to match the structure of the Thytu/ChessInstruct dataset.

## ✨ Key Features

- βœ… **ChessInstruct Compatible**: uses the exact `task`/`input`/`expected_output`/`KIND` format
- πŸš€ **Gemma3-270m Optimized**: a proven format that works with Unsloth
- πŸ’Ύ **Memory Efficient**: optimized for Colab T4 GPU environments
- 🎬 **Real-world Data**: movie and TV subtitle translations
- πŸ”„ **Instruction Format**: ready for instruction-following fine-tuning
- ⚑ **Zero Errors**: format validated to work with Unsloth templates

## πŸ“Š Dataset Statistics

| Split      | Examples | Size   |
|------------|----------|--------|
| Train      | 736,407  | ~174MB |
| Validation | 92,050   | ~22MB  |
| Test       | 92,052   | ~22MB  |
| **Total**  | 920,509  | ~218MB |
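As a quick sanity check, the split sizes above can be verified after loading (a minimal sketch; it assumes the `zantag/gemma-en-bg` repository id used in the examples below):

```python
from datasets import load_dataset

# Load all splits and confirm the advertised row counts
ds = load_dataset("zantag/gemma-en-bg")
print({split: ds[split].num_rows for split in ds})
# Expected: {'train': 736407, 'validation': 92050, 'test': 92052}
```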

## πŸ”§ Data Format

Each example follows the ChessInstruct format:

```json
{
  "task": "Translate the following English text to Bulgarian:\n\nEnglish: Hello, how are you?\nBulgarian:",
  "input": "Hello, how are you?",
  "expected_output": "Π—Π΄Ρ€Π°Π²Π΅ΠΉ, ΠΊΠ°ΠΊ си?",
  "KIND": "TRANSLATION_EN_BG"
}
```

### Format Fields

- `task`: complete instruction prompt for the model
- `input`: raw English text to translate
- `expected_output`: Bulgarian translation
- `KIND`: task category identifier (always `TRANSLATION_EN_BG`)
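Putting these fields together, a record in this shape can be built mechanically from a sentence pair (a minimal sketch; `make_record` is an illustrative helper, not part of the dataset tooling):

```python
def make_record(en: str, bg: str) -> dict:
    """Build one ChessInstruct-style record from an EN/BG sentence pair."""
    return {
        "task": f"Translate the following English text to Bulgarian:\n\nEnglish: {en}\nBulgarian:",
        "input": en,
        "expected_output": bg,
        "KIND": "TRANSLATION_EN_BG",
    }

print(make_record("Hello, how are you?", "Π—Π΄Ρ€Π°Π²Π΅ΠΉ, ΠΊΠ°ΠΊ си?"))
```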

## πŸš€ Quick Start

### Loading the Dataset

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("zantag/gemma-en-bg")

# Access splits
train_data = dataset["train"]
val_data = dataset["validation"]
test_data = dataset["test"]

# Print a sample
sample = train_data[0]
print("Task:", sample["task"])
print("Input:", sample["input"])
print("Expected Output:", sample["expected_output"])
print("Kind:", sample["KIND"])
```

### Fine-tuning with Unsloth

```python
from unsloth import FastLanguageModel
from datasets import load_dataset

# Load the base model and tokenizer
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-3-270m-it",
    max_seq_length=2048,
    load_in_4bit=False,
)

# Load the training split
dataset = load_dataset("zantag/gemma-en-bg", split="train")

# Format for training (the ChessInstruct fields map directly onto
# Gemma 3's <start_of_turn>/<end_of_turn> chat markers)
def formatting_prompts_func(examples):
    tasks = examples["task"]
    expected_outputs = examples["expected_output"]
    texts = []

    for task, output in zip(tasks, expected_outputs):
        # Instruction goes in the user turn, translation in the model turn
        text = f"<start_of_turn>user\n{task}<end_of_turn>\n<start_of_turn>model\n{output}<end_of_turn>"
        texts.append(text)

    return {"text": texts}

dataset = dataset.map(formatting_prompts_func, batched=True)
```
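Alternatively, instead of hand-writing the turn markers, the tokenizer's built-in chat template can render the same structure (a sketch; it assumes the `unsloth/gemma-3-270m-it` tokenizer ships a Gemma 3 chat template that emits the same `<start_of_turn>`/`<end_of_turn>` markers):

```python
def formatting_with_template(examples):
    texts = []
    for task, output in zip(examples["task"], examples["expected_output"]):
        messages = [
            {"role": "user", "content": task},
            {"role": "assistant", "content": output},
        ]
        # Render the conversation as a training string without tokenizing
        texts.append(tokenizer.apply_chat_template(messages, tokenize=False))
    return {"text": texts}

# Use in place of formatting_prompts_func above
dataset = dataset.map(formatting_with_template, batched=True)
```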

## 🎯 Use Cases

- Fine-tuning Gemma3-270m for English β†’ Bulgarian translation
- Training instruction-following translation models
- Subtitle translation systems
- Cross-lingual dialogue systems
- Educational translation tools

πŸ” Data Quality

The dataset has been carefully processed to ensure high quality:

  • βœ… HTML/URL Removal: Cleaned of HTML tags and URLs
  • βœ… Length Filtering: Sentences between 2-50 words
  • βœ… Content Filtering: Removed lines with only numbers/punctuation
  • βœ… Encoding: Proper UTF-8 encoding for Cyrillic text
  • βœ… Format Validation: 100% ChessInstruct-compatible format
  • βœ… Zero Conversion Errors: All 45,313 examples converted successfully
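The exact cleaning pipeline is not published with this card, but the filters described above amount to something like the following (an illustrative sketch; the regexes and thresholds are assumptions, not the original code):

```python
import re

HTML_TAG = re.compile(r"<[^>]+>")
URL = re.compile(r"https?://\S+|www\.\S+")
ONLY_NUM_PUNCT = re.compile(r"^[\W\d_]+$")

def clean_line(text: str) -> str | None:
    """Return the cleaned line, or None if it should be dropped."""
    text = URL.sub("", HTML_TAG.sub("", text)).strip()  # strip HTML tags and URLs
    if not text or ONLY_NUM_PUNCT.match(text):          # only numbers/punctuation
        return None
    if not 2 <= len(text.split()) <= 50:                # 2-50 word length filter
        return None
    return text
```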

πŸ› οΈ Technical Details

### Why ChessInstruct Format?

This dataset uses the same format as the Thytu/ChessInstruct dataset, which is known to work well with Unsloth. The format provides:

- Clear separation of instruction (`task`) and expected response (`expected_output`)
- Consistent structure that Unsloth can reliably parse
- Task categorization through the `KIND` field
- Direct compatibility with existing Unsloth templates

### Recommended Fine-tuning Settings

```python
from trl import SFTTrainer, SFTConfig

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        per_device_train_batch_size=8,
        gradient_accumulation_steps=1,
        warmup_steps=5,
        max_steps=100,  # Adjust based on your needs
        learning_rate=5e-5,
        logging_steps=1,
        optim="adamw_8bit",
        weight_decay=0.01,
        lr_scheduler_type="linear",
        seed=3407,
        output_dir="outputs",
    ),
)
```
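The config above assumes the model is already prepared for training. With Unsloth, LoRA adapters are usually attached before building the trainer (a sketch; the rank and target modules below are common illustrative defaults, not values prescribed by this card):

```python
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                           # LoRA rank (illustrative choice)
    lora_alpha=16,
    lora_dropout=0.0,
    bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    random_state=3407,
)

trainer.train()  # then launch training
```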

### Hardware Requirements

- **Minimum**: CPU-only training (2-4 hours on a modern CPU)
- **Recommended**: Tesla T4 GPU in Google Colab (15-30 minutes)
- **Memory**: 4-8GB RAM, 2-4GB VRAM

## πŸ“š Example Usage

```python
# Example translation prompt
sample = dataset["train"][0]

print("Task prompt:")
print(sample["task"])
print("\nExpected translation:")
print(sample["expected_output"])
print("\nTask category:")
print(sample["KIND"])

# Output:
# Task prompt:
# Translate the following English text to Bulgarian:
#
# English: Hello, how are you?
# Bulgarian:
#
# Expected translation:
# Π—Π΄Ρ€Π°Π²Π΅ΠΉ, ΠΊΠ°ΠΊ си?
#
# Task category:
# TRANSLATION_EN_BG
```
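After fine-tuning, the same prompt shape can be used for generation (a minimal sketch; the sample sentence and generation settings are illustrative, and `for_inference` is Unsloth's fast-inference toggle):

```python
FastLanguageModel.for_inference(model)  # switch the Unsloth model to inference mode

prompt = (
    "<start_of_turn>user\n"
    "Translate the following English text to Bulgarian:\n\n"
    "English: Good morning!\nBulgarian:<end_of_turn>\n"
    "<start_of_turn>model\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```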

## πŸ”— Related Resources

- [Thytu/ChessInstruct](https://huggingface.co/datasets/Thytu/ChessInstruct) – the dataset whose format this card follows
- [unsloth/gemma-3-270m-it](https://huggingface.co/unsloth/gemma-3-270m-it) – the base model used in the examples
- OpenSubtitles parallel corpus (available via OPUS, https://opus.nlpl.eu) – the source data

## πŸ“„ Citation

If you use this dataset, please cite the original OpenSubtitles corpus:

```bibtex
@inproceedings{lison-tiedemann-2016-opensubtitles2016,
    title = "{O}pen{S}ubtitles2016: Extracting Large Parallel Corpora from Movie and {TV} Subtitles",
    author = "Lison, Pierre and Tiedemann, J{\"o}rg",
    booktitle = "Proceedings of the Tenth International Conference on Language Resources and Evaluation ({LREC}'16)",
    month = may,
    year = "2016",
    address = "Portoro{\v{z}}, Slovenia",
    publisher = "European Language Resources Association (ELRA)",
    url = "https://aclanthology.org/L16-1147",
    pages = "923--929",
}
```

## πŸ“‹ License

This dataset is released under the CC-BY-4.0 license, in line with the OpenSubtitles corpus licensing terms.

## 🀝 Contributing

Found an issue or want to improve the dataset? Please open an issue or submit a pull request!


*Created with ❀️ for the open-source ML community*

*Format guaranteed compatible with Unsloth and Gemma3-270m fine-tuning*